Dec 16 12:35:26.800890 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:35:26.800914 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:35:26.800923 kernel: KASLR enabled
Dec 16 12:35:26.800929 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:35:26.800934 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:35:26.800940 kernel: random: crng init done
Dec 16 12:35:26.800946 kernel: secureboot: Secure boot disabled
Dec 16 12:35:26.800952 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:35:26.800960 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:35:26.800967 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:35:26.800973 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.800978 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.800984 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.800990 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.800997 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.801004 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.801010 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.801016 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.801022 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:35:26.801028 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:35:26.801034 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:35:26.801040 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:35:26.801046 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:35:26.801052 kernel: Zone ranges:
Dec 16 12:35:26.801058 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:35:26.801066 kernel: DMA32 empty
Dec 16 12:35:26.801072 kernel: Normal empty
Dec 16 12:35:26.801078 kernel: Device empty
Dec 16 12:35:26.801083 kernel: Movable zone start for each node
Dec 16 12:35:26.801089 kernel: Early memory node ranges
Dec 16 12:35:26.801095 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:35:26.801102 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:35:26.801108 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:35:26.801114 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:35:26.801119 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:35:26.801125 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:35:26.801131 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:35:26.801139 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:35:26.801145 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:35:26.801151 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:35:26.801160 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:35:26.801167 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:35:26.801173 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:35:26.801181 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:35:26.801187 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:35:26.801194 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:35:26.801200 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:35:26.801206 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:35:26.801212 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:35:26.801219 kernel: psci: Trusted OS migration not required
Dec 16 12:35:26.801225 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:35:26.801232 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:35:26.801238 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:35:26.801246 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:35:26.801253 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:35:26.801259 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:35:26.801266 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:35:26.801272 kernel: CPU features: detected: Spectre-v4
Dec 16 12:35:26.801279 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:35:26.801285 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:35:26.801292 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:35:26.801298 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:35:26.801305 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:35:26.801311 kernel: alternatives: applying boot alternatives
Dec 16 12:35:26.801318 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:35:26.801326 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:35:26.801333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:35:26.801339 kernel: Fallback order for Node 0: 0
Dec 16 12:35:26.801346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:35:26.801352 kernel: Policy zone: DMA
Dec 16 12:35:26.801358 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:35:26.801364 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:35:26.801371 kernel: software IO TLB: area num 4.
Dec 16 12:35:26.801377 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:35:26.801384 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:35:26.801390 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:35:26.801398 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:35:26.801405 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:35:26.801412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:35:26.801419 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:35:26.801425 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:35:26.801431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:35:26.801438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:35:26.801444 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:35:26.801451 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:35:26.801457 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:35:26.801464 kernel: GICv3: 256 SPIs implemented
Dec 16 12:35:26.801472 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:35:26.801478 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:35:26.801485 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:35:26.801491 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:35:26.801498 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:35:26.801504 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:35:26.801511 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:35:26.801518 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:35:26.801524 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:35:26.801541 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:35:26.801553 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:35:26.801560 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:35:26.801569 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:35:26.801577 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:35:26.801585 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:35:26.801592 kernel: arm-pv: using stolen time PV
Dec 16 12:35:26.801600 kernel: Console: colour dummy device 80x25
Dec 16 12:35:26.801609 kernel: ACPI: Core revision 20240827
Dec 16 12:35:26.801618 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:35:26.801626 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:35:26.801632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:35:26.801639 kernel: landlock: Up and running.
Dec 16 12:35:26.801650 kernel: SELinux: Initializing.
Dec 16 12:35:26.801657 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:35:26.801664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:35:26.801673 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:35:26.801680 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:35:26.801687 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:35:26.801693 kernel: Remapping and enabling EFI services.
Dec 16 12:35:26.801700 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:35:26.801707 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:35:26.801730 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:35:26.801737 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:35:26.801744 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:35:26.801753 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:35:26.801760 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:35:26.801767 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:35:26.801774 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:35:26.801781 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:35:26.801790 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:35:26.801797 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:35:26.801805 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:35:26.801812 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:35:26.801820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:35:26.801830 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:35:26.801837 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:35:26.801844 kernel: SMP: Total of 4 processors activated.
Dec 16 12:35:26.801851 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:35:26.801860 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:35:26.801867 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:35:26.801874 kernel: CPU features: detected: Common not Private translations
Dec 16 12:35:26.801881 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:35:26.801888 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:35:26.801896 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:35:26.801903 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:35:26.801910 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:35:26.801916 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:35:26.801925 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:35:26.801932 kernel: alternatives: applying system-wide alternatives
Dec 16 12:35:26.801939 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:35:26.801946 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 16 12:35:26.801953 kernel: devtmpfs: initialized
Dec 16 12:35:26.801960 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:35:26.801967 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:35:26.801974 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:35:26.801982 kernel: 0 pages in range for non-PLT usage
Dec 16 12:35:26.801990 kernel: 508400 pages in range for PLT usage
Dec 16 12:35:26.801997 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:35:26.802004 kernel: SMBIOS 3.0.0 present.
Dec 16 12:35:26.802011 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:35:26.802018 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:35:26.802025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:35:26.802032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:35:26.802039 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:35:26.802046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:35:26.802055 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:35:26.802062 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Dec 16 12:35:26.802069 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:35:26.802076 kernel: cpuidle: using governor menu
Dec 16 12:35:26.802083 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:35:26.802090 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:35:26.802097 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:35:26.802109 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:35:26.802117 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:35:26.802126 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:35:26.802133 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:35:26.802141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:35:26.802148 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:35:26.802155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:35:26.802162 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:35:26.802169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:35:26.802176 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:35:26.802193 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:35:26.802202 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:35:26.802209 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:35:26.802216 kernel: ACPI: Interpreter enabled
Dec 16 12:35:26.802223 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:35:26.802230 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:35:26.802236 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:35:26.802243 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:35:26.802250 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:35:26.802257 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:35:26.802264 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:35:26.802272 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:35:26.802279 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:35:26.802416 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:35:26.802482 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:35:26.802552 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:35:26.802615 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:35:26.802675 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:35:26.802687 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:35:26.802694 kernel: PCI host bridge to bus 0000:00
Dec 16 12:35:26.802847 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:35:26.802906 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:35:26.802961 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:35:26.803013 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:35:26.803100 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:35:26.803176 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:35:26.803238 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:35:26.803300 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:35:26.803370 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:35:26.803435 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:35:26.803504 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:35:26.803604 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:35:26.803681 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:35:26.803750 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:35:26.803806 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:35:26.803815 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:35:26.803823 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:35:26.803830 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:35:26.803837 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:35:26.803847 kernel: iommu: Default domain type: Translated
Dec 16 12:35:26.803854 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:35:26.803862 kernel: efivars: Registered efivars operations
Dec 16 12:35:26.803868 kernel: vgaarb: loaded
Dec 16 12:35:26.803875 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:35:26.803882 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:35:26.803889 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:35:26.803896 kernel: pnp: PnP ACPI init
Dec 16 12:35:26.803965 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:35:26.803978 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:35:26.803985 kernel: NET: Registered PF_INET protocol family
Dec 16 12:35:26.803993 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:35:26.804000 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:35:26.804007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:35:26.804014 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:35:26.804022 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:35:26.804029 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:35:26.804038 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:35:26.804045 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:35:26.804052 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:35:26.804060 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:35:26.804067 kernel: kvm [1]: HYP mode not available
Dec 16 12:35:26.804074 kernel: Initialise system trusted keyrings
Dec 16 12:35:26.804081 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:35:26.804088 kernel: Key type asymmetric registered
Dec 16 12:35:26.804095 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:35:26.804104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:35:26.804112 kernel: io scheduler mq-deadline registered
Dec 16 12:35:26.804119 kernel: io scheduler kyber registered
Dec 16 12:35:26.804126 kernel: io scheduler bfq registered
Dec 16 12:35:26.804133 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:35:26.804140 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:35:26.804148 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:35:26.804208 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:35:26.804217 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:35:26.804226 kernel: thunder_xcv, ver 1.0
Dec 16 12:35:26.804234 kernel: thunder_bgx, ver 1.0
Dec 16 12:35:26.804241 kernel: nicpf, ver 1.0
Dec 16 12:35:26.804248 kernel: nicvf, ver 1.0
Dec 16 12:35:26.804323 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:35:26.804382 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:35:26 UTC (1765888526)
Dec 16 12:35:26.804392 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:35:26.804399 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:35:26.804408 kernel: watchdog: NMI not fully supported
Dec 16 12:35:26.804415 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:35:26.804428 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:35:26.804439 kernel: Segment Routing with IPv6
Dec 16 12:35:26.804449 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:35:26.804458 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:35:26.804465 kernel: Key type dns_resolver registered
Dec 16 12:35:26.804473 kernel: registered taskstats version 1
Dec 16 12:35:26.804480 kernel: Loading compiled-in X.509 certificates
Dec 16 12:35:26.804487 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:35:26.804496 kernel: Demotion targets for Node 0: null
Dec 16 12:35:26.804502 kernel: Key type .fscrypt registered
Dec 16 12:35:26.804509 kernel: Key type fscrypt-provisioning registered
Dec 16 12:35:26.804516 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:35:26.804523 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:35:26.804536 kernel: ima: No architecture policies found
Dec 16 12:35:26.804546 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:35:26.804553 kernel: clk: Disabling unused clocks
Dec 16 12:35:26.804560 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:35:26.804569 kernel: Warning: unable to open an initial console.
Dec 16 12:35:26.804576 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:35:26.804583 kernel: Run /init as init process
Dec 16 12:35:26.804590 kernel: with arguments:
Dec 16 12:35:26.804598 kernel: /init
Dec 16 12:35:26.804604 kernel: with environment:
Dec 16 12:35:26.804611 kernel: HOME=/
Dec 16 12:35:26.804619 kernel: TERM=linux
Dec 16 12:35:26.804627 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:35:26.804638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:35:26.804647 systemd[1]: Detected virtualization kvm.
Dec 16 12:35:26.804655 systemd[1]: Detected architecture arm64.
Dec 16 12:35:26.804662 systemd[1]: Running in initrd.
Dec 16 12:35:26.804670 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:35:26.804677 systemd[1]: Hostname set to .
Dec 16 12:35:26.804685 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:35:26.804694 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:35:26.804702 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:35:26.804710 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:35:26.804736 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:35:26.804745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:35:26.804753 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:35:26.804762 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:35:26.804773 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:35:26.804782 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:35:26.804790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:35:26.804798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:35:26.804806 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:35:26.804813 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:35:26.804821 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:35:26.804829 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:35:26.804840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:35:26.804848 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:35:26.804856 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:35:26.804864 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:35:26.804872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:35:26.804880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:35:26.804888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:35:26.804896 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:35:26.804905 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:35:26.804913 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:35:26.804921 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:35:26.804929 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:35:26.804936 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:35:26.804958 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:35:26.804965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:35:26.804973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:35:26.804981 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:35:26.804991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:35:26.804999 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:35:26.805028 systemd-journald[244]: Collecting audit messages is disabled.
Dec 16 12:35:26.805049 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:35:26.805059 systemd-journald[244]: Journal started
Dec 16 12:35:26.805077 systemd-journald[244]: Runtime Journal (/run/log/journal/6fa9d8ccbfd94356a4dcc6dc4730e3a7) is 6M, max 48.5M, 42.4M free.
Dec 16 12:35:26.802160 systemd-modules-load[246]: Inserted module 'overlay'
Dec 16 12:35:26.808740 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:35:26.814844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:35:26.816861 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:35:26.817946 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 16 12:35:26.818809 kernel: Bridge firewalling registered
Dec 16 12:35:26.819206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:35:26.822053 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:35:26.823358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:35:26.834955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:35:26.838420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:35:26.840035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:35:26.844265 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:35:26.847878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:35:26.850393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:35:26.852562 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:35:26.853594 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:35:26.861929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:35:26.864877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:35:26.877329 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:35:26.903104 systemd-resolved[290]: Positive Trust Anchors:
Dec 16 12:35:26.903125 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:35:26.903157 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:35:26.908377 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 16 12:35:26.911328 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:35:26.913917 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:35:26.959773 kernel: SCSI subsystem initialized
Dec 16 12:35:26.964751 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:35:26.972746 kernel: iscsi: registered transport (tcp)
Dec 16 12:35:26.985803 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:35:26.985855 kernel: QLogic iSCSI HBA Driver
Dec 16 12:35:27.004129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:35:27.029136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:35:27.030641 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:35:27.082466 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:35:27.086857 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:35:27.145773 kernel: raid6: neonx8 gen() 15487 MB/s
Dec 16 12:35:27.162791 kernel: raid6: neonx4 gen() 15670 MB/s
Dec 16 12:35:27.179769 kernel: raid6: neonx2 gen() 13146 MB/s
Dec 16 12:35:27.196776 kernel: raid6: neonx1 gen() 10267 MB/s
Dec 16 12:35:27.213771 kernel: raid6: int64x8 gen() 6889 MB/s
Dec 16 12:35:27.230764 kernel: raid6: int64x4 gen() 7341 MB/s
Dec 16 12:35:27.247756 kernel: raid6: int64x2 gen() 6092 MB/s
Dec 16 12:35:27.264777 kernel: raid6: int64x1 gen() 5036 MB/s
Dec 16 12:35:27.264800 kernel: raid6: using algorithm neonx4 gen() 15670 MB/s
Dec 16 12:35:27.282747 kernel: raid6: .... xor() 12369 MB/s, rmw enabled
Dec 16 12:35:27.282773 kernel: raid6: using neon recovery algorithm
Dec 16 12:35:27.288118 kernel: xor: measuring software checksum speed
Dec 16 12:35:27.288144 kernel: 8regs : 20849 MB/sec
Dec 16 12:35:27.288742 kernel: 32regs : 21641 MB/sec
Dec 16 12:35:27.289805 kernel: arm64_neon : 24513 MB/sec
Dec 16 12:35:27.289819 kernel: xor: using function: arm64_neon (24513 MB/sec)
Dec 16 12:35:27.341751 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:35:27.349799 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:35:27.352646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:35:27.377327 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Dec 16 12:35:27.381805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:35:27.385126 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:35:27.413209 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Dec 16 12:35:27.437730 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:35:27.440055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:35:27.496196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:35:27.500469 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:35:27.552747 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:35:27.558640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:35:27.561889 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:35:27.561934 kernel: GPT:9289727 != 19775487
Dec 16 12:35:27.561945 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:35:27.562927 kernel: GPT:9289727 != 19775487
Dec 16 12:35:27.564365 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:35:27.565785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:35:27.573521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:35:27.573670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:35:27.586375 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:35:27.591616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:35:27.606010 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:35:27.619937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:35:27.622713 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:35:27.625915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:35:27.648825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:35:27.655018 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:35:27.656245 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:35:27.659069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:35:27.661146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:35:27.663497 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:35:27.666177 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:35:27.668068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:35:27.693069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:35:27.696171 disk-uuid[591]: Primary Header is updated.
Dec 16 12:35:27.696171 disk-uuid[591]: Secondary Entries is updated.
Dec 16 12:35:27.696171 disk-uuid[591]: Secondary Header is updated.
Dec 16 12:35:27.699157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:35:28.711818 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:35:28.712625 disk-uuid[599]: The operation has completed successfully.
Dec 16 12:35:28.743674 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:35:28.743802 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:35:28.766838 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:35:28.792094 sh[611]: Success
Dec 16 12:35:28.805780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:35:28.805863 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:35:28.807137 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:35:28.815752 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:35:28.847259 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:35:28.850311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:35:28.872290 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:35:28.881259 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (625)
Dec 16 12:35:28.881295 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:35:28.881306 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:35:28.887160 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:35:28.887202 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:35:28.888610 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:35:28.889983 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:35:28.891337 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:35:28.892219 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:35:28.893968 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:35:28.916494 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (648)
Dec 16 12:35:28.916566 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:35:28.916578 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:35:28.920177 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:35:28.920216 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:35:28.924735 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:35:28.925871 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:35:28.928817 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:35:29.026780 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:35:29.029873 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:35:29.046488 ignition[669]: Ignition 2.22.0
Dec 16 12:35:29.046505 ignition[669]: Stage: fetch-offline
Dec 16 12:35:29.046546 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:35:29.046557 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:35:29.046651 ignition[669]: parsed url from cmdline: ""
Dec 16 12:35:29.046655 ignition[669]: no config URL provided
Dec 16 12:35:29.046660 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:35:29.046667 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:35:29.046692 ignition[669]: op(1): [started] loading QEMU firmware config module
Dec 16 12:35:29.046696 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:35:29.053740 ignition[669]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:35:29.075624 systemd-networkd[804]: lo: Link UP
Dec 16 12:35:29.075638 systemd-networkd[804]: lo: Gained carrier
Dec 16 12:35:29.076376 systemd-networkd[804]: Enumeration completed
Dec 16 12:35:29.076503 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:35:29.076798 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:35:29.076802 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:35:29.077801 systemd-networkd[804]: eth0: Link UP
Dec 16 12:35:29.077956 systemd-networkd[804]: eth0: Gained carrier
Dec 16 12:35:29.077965 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:35:29.078397 systemd[1]: Reached target network.target - Network.
Dec 16 12:35:29.107789 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:35:29.115217 ignition[669]: parsing config with SHA512: aa6fc7b7f1dc9568e217a80b1ea022eb068d4be8e88815c6d369ca06a09315ebecaeceb57ac8ea345e2536d64bf13b1a0866e54aacae4ef738a042a048293f2d
Dec 16 12:35:29.121613 unknown[669]: fetched base config from "system"
Dec 16 12:35:29.121625 unknown[669]: fetched user config from "qemu"
Dec 16 12:35:29.122061 ignition[669]: fetch-offline: fetch-offline passed
Dec 16 12:35:29.122123 ignition[669]: Ignition finished successfully
Dec 16 12:35:29.125792 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:35:29.127081 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:35:29.128010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:35:29.163031 ignition[812]: Ignition 2.22.0
Dec 16 12:35:29.163048 ignition[812]: Stage: kargs
Dec 16 12:35:29.163199 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:35:29.163211 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:35:29.164027 ignition[812]: kargs: kargs passed
Dec 16 12:35:29.164079 ignition[812]: Ignition finished successfully
Dec 16 12:35:29.168834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:35:29.170959 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:35:29.200913 ignition[820]: Ignition 2.22.0
Dec 16 12:35:29.200932 ignition[820]: Stage: disks
Dec 16 12:35:29.201069 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:35:29.201078 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:35:29.201904 ignition[820]: disks: disks passed
Dec 16 12:35:29.204634 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:35:29.201959 ignition[820]: Ignition finished successfully
Dec 16 12:35:29.206825 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:35:29.208057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:35:29.209769 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:35:29.211201 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:35:29.212979 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:35:29.215557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:35:29.239266 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:35:29.344254 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:35:29.347951 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:35:29.431744 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:35:29.432491 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:35:29.434832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:35:29.439812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:35:29.442828 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:35:29.443819 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:35:29.443866 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:35:29.443892 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:35:29.458629 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:35:29.460988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:35:29.466736 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Dec 16 12:35:29.469984 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:35:29.470017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:35:29.473192 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:35:29.473247 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:35:29.475143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:35:29.503671 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:35:29.508296 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:35:29.511865 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:35:29.515225 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:35:29.600828 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:35:29.602912 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:35:29.604478 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:35:29.629765 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:35:29.643340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:35:29.658916 ignition[951]: INFO : Ignition 2.22.0
Dec 16 12:35:29.658916 ignition[951]: INFO : Stage: mount
Dec 16 12:35:29.660629 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:35:29.660629 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:35:29.660629 ignition[951]: INFO : mount: mount passed
Dec 16 12:35:29.660629 ignition[951]: INFO : Ignition finished successfully
Dec 16 12:35:29.662548 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:35:29.665834 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:35:29.879736 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:35:29.881348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:35:29.907249 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Dec 16 12:35:29.907298 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:35:29.907308 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:35:29.911791 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:35:29.911849 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:35:29.913915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:35:29.949027 ignition[983]: INFO : Ignition 2.22.0
Dec 16 12:35:29.949027 ignition[983]: INFO : Stage: files
Dec 16 12:35:29.950470 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:35:29.950470 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:35:29.950470 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:35:29.953685 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:35:29.953685 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:35:29.953685 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:35:29.953685 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:35:29.953685 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:35:29.953387 unknown[983]: wrote ssh authorized keys file for user: core
Dec 16 12:35:29.960396 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:35:29.960396 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:35:30.033643 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:35:30.226295 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:35:30.226295 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:35:30.230514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:35:30.242592 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 16 12:35:30.289896 systemd-networkd[804]: eth0: Gained IPv6LL
Dec 16 12:35:30.514879 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:35:30.707830 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 16 12:35:30.707830 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:35:30.711358 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 12:35:30.713424 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:35:30.733786 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:35:30.737460 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:35:30.738891 ignition[983]: INFO : files: files passed
Dec 16 12:35:30.738891 ignition[983]: INFO : Ignition finished successfully
Dec 16 12:35:30.740802 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:35:30.743771 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:35:30.746898 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:35:30.757482 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:35:30.757601 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:35:30.760798 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:35:30.764179 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:35:30.764179 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:35:30.767106 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:35:30.766380 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:35:30.768592 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:35:30.771327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:35:30.850002 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:35:30.850141 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:35:30.852408 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:35:30.854012 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:35:30.855808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:35:30.856689 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:35:30.888001 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:35:30.890501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:35:30.912123 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:35:30.913442 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:35:30.915384 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:35:30.917020 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:35:30.917169 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:35:30.919642 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:35:30.921601 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:35:30.923259 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:35:30.924952 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:35:30.926815 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:35:30.928876 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:35:30.930736 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:35:30.932575 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:35:30.934635 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:35:30.936600 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:35:30.938325 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:35:30.939870 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:35:30.940015 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:35:30.942156 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:35:30.943866 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:35:30.945792 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:35:30.946827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:35:30.948679 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:35:30.948822 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:35:30.951591 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:35:30.951751 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:35:30.953816 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:35:30.955627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:35:30.960803 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:35:30.962173 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:35:30.964064 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:35:30.965641 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:35:30.965760 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:35:30.967082 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:35:30.967160 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:35:30.968687 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 16 12:35:30.968877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:35:30.970512 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:35:30.970688 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:35:30.973042 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:35:30.974522 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:35:30.974653 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:35:30.977429 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:35:30.978773 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:35:30.978908 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:35:30.980738 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:35:30.980843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:35:30.986391 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:35:30.989894 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:35:30.999501 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:35:31.008355 ignition[1037]: INFO : Ignition 2.22.0 Dec 16 12:35:31.008355 ignition[1037]: INFO : Stage: umount Dec 16 12:35:31.010193 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:35:31.010193 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:35:31.010193 ignition[1037]: INFO : umount: umount passed Dec 16 12:35:31.010193 ignition[1037]: INFO : Ignition finished successfully Dec 16 12:35:31.011475 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:35:31.011621 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:35:31.013268 systemd[1]: Stopped target network.target - Network. Dec 16 12:35:31.014582 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:35:31.014655 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:35:31.017260 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:35:31.017308 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:35:31.019126 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:35:31.019178 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:35:31.020919 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:35:31.020963 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:35:31.022898 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:35:31.024448 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:35:31.030550 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:35:31.030698 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:35:31.034983 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:35:31.035294 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:35:31.035334 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:35:31.038598 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Dec 16 12:35:31.041472 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:35:31.041608 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:35:31.045244 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:35:31.045405 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:35:31.047701 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:35:31.047751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:35:31.051671 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:35:31.053542 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:35:31.053607 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:35:31.055998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:35:31.056051 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:35:31.060422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:35:31.060482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:35:31.062401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:35:31.065847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:35:31.066204 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:35:31.066305 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:35:31.070131 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:35:31.070194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:35:31.081975 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:35:31.082176 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:35:31.093639 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:35:31.093835 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:35:31.096187 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:35:31.096227 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:35:31.097975 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:35:31.098007 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:35:31.099963 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:35:31.100024 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:35:31.102641 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:35:31.102697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:35:31.105377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:35:31.105435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:35:31.109371 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:35:31.110652 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:35:31.110751 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:35:31.114206 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 12:35:31.114256 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:35:31.117504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:35:31.117576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:35:31.130983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:35:31.131106 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:35:31.133505 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:35:31.136354 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:35:31.146991 systemd[1]: Switching root. Dec 16 12:35:31.188978 systemd-journald[244]: Journal stopped Dec 16 12:35:31.988714 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Dec 16 12:35:31.991312 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:35:31.991329 kernel: SELinux: policy capability open_perms=1 Dec 16 12:35:31.991339 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:35:31.991353 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:35:31.991364 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:35:31.991373 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:35:31.991383 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:35:31.991393 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:35:31.991402 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:35:31.991411 kernel: audit: type=1403 audit(1765888531.349:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:35:31.991423 systemd[1]: Successfully loaded SELinux policy in 48.870ms. Dec 16 12:35:31.991443 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.566ms. Dec 16 12:35:31.991455 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:35:31.991466 systemd[1]: Detected virtualization kvm. Dec 16 12:35:31.991476 systemd[1]: Detected architecture arm64. Dec 16 12:35:31.991487 systemd[1]: Detected first boot. Dec 16 12:35:31.991497 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:35:31.991507 zram_generator::config[1083]: No configuration found. Dec 16 12:35:31.991554 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:35:31.991565 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:35:31.991577 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:35:31.991587 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:35:31.991597 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:35:31.991608 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:35:31.991621 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:35:31.991631 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:35:31.991641 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:35:31.991652 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Dec 16 12:35:31.991662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:35:31.991673 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:35:31.991684 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:35:31.991694 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:35:31.991706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:35:31.991729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:35:31.991742 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:35:31.991752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:35:31.991763 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:35:31.991773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:35:31.991785 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:35:31.991795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:35:31.991808 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:35:31.991818 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:35:31.991829 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:35:31.991839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:35:31.991849 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:35:31.991859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:35:31.991870 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:35:31.991881 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:35:31.991891 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:35:31.991902 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:35:31.991913 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:35:31.991923 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:35:31.991933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:35:31.991944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:35:31.991954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:35:31.991964 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:35:31.991974 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:35:31.991984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:35:31.991996 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:35:31.992006 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:35:31.992016 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:35:31.992026 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 16 12:35:31.992037 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:35:31.992047 systemd[1]: Reached target machines.target - Containers. Dec 16 12:35:31.992058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:35:31.992068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:35:31.992080 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:35:31.992091 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:35:31.992101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:35:31.992111 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:35:31.992122 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:35:31.992131 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:35:31.992141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:35:31.992152 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:35:31.992163 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:35:31.992174 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:35:31.992184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:35:31.992195 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:35:31.992206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:35:31.992215 kernel: fuse: init (API version 7.41) Dec 16 12:35:31.992225 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:35:31.992235 kernel: loop: module loaded Dec 16 12:35:31.992244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:35:31.992254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:35:31.992266 kernel: ACPI: bus type drm_connector registered Dec 16 12:35:31.992276 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:35:31.992286 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:35:31.992297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:35:31.992307 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:35:31.992316 systemd[1]: Stopped verity-setup.service. Dec 16 12:35:31.992328 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:35:31.992338 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:35:31.992386 systemd-journald[1148]: Collecting audit messages is disabled. Dec 16 12:35:31.992414 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:35:31.992428 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Dec 16 12:35:31.992440 systemd-journald[1148]: Journal started Dec 16 12:35:31.992467 systemd-journald[1148]: Runtime Journal (/run/log/journal/6fa9d8ccbfd94356a4dcc6dc4730e3a7) is 6M, max 48.5M, 42.4M free. Dec 16 12:35:31.748871 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:35:31.770295 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:35:31.770728 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:35:31.994899 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:35:31.995778 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:35:31.996958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:35:31.998305 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:35:31.999793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:35:32.001158 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:35:32.001413 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:35:32.002786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:35:32.002945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:35:32.004242 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:35:32.004430 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:35:32.005673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:35:32.005873 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:35:32.007160 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:35:32.007323 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:35:32.008669 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:35:32.008842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:35:32.010090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:35:32.013114 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:35:32.014713 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:35:32.016135 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:35:32.028416 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:35:32.031001 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:35:32.033236 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:35:32.034604 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:35:32.034635 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:35:32.036487 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:35:32.044636 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:35:32.045840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:35:32.047182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 16 12:35:32.049161 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:35:32.050538 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:35:32.053890 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:35:32.055054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:35:32.055988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:35:32.059858 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:35:32.063607 systemd-journald[1148]: Time spent on flushing to /var/log/journal/6fa9d8ccbfd94356a4dcc6dc4730e3a7 is 21.862ms for 882 entries. Dec 16 12:35:32.063607 systemd-journald[1148]: System Journal (/var/log/journal/6fa9d8ccbfd94356a4dcc6dc4730e3a7) is 8M, max 195.6M, 187.6M free. Dec 16 12:35:32.095898 systemd-journald[1148]: Received client request to flush runtime journal. Dec 16 12:35:32.062959 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:35:32.067155 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:35:32.068457 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:35:32.070009 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:35:32.073750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:35:32.079065 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:35:32.083032 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:35:32.092781 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:35:32.098821 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:35:32.103809 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:35:32.105315 kernel: loop0: detected capacity change from 0 to 119840 Dec 16 12:35:32.109425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:35:32.115797 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:35:32.119747 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:35:32.135777 kernel: loop1: detected capacity change from 0 to 200800 Dec 16 12:35:32.136210 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Dec 16 12:35:32.136224 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Dec 16 12:35:32.140450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:35:32.161121 kernel: loop2: detected capacity change from 0 to 100632 Dec 16 12:35:32.194788 kernel: loop3: detected capacity change from 0 to 119840 Dec 16 12:35:32.201993 kernel: loop4: detected capacity change from 0 to 200800 Dec 16 12:35:32.208760 kernel: loop5: detected capacity change from 0 to 100632 Dec 16 12:35:32.214066 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:35:32.214837 (sd-merge)[1223]: Merged extensions into '/usr'. 
Dec 16 12:35:32.219480 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:35:32.219498 systemd[1]: Reloading... Dec 16 12:35:32.278762 zram_generator::config[1250]: No configuration found. Dec 16 12:35:32.364773 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:35:32.439037 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:35:32.439331 systemd[1]: Reloading finished in 219 ms. Dec 16 12:35:32.469470 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:35:32.472796 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:35:32.503258 systemd[1]: Starting ensure-sysext.service... Dec 16 12:35:32.505157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:35:32.511781 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:35:32.517593 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:35:32.518915 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:35:32.518932 systemd[1]: Reloading... Dec 16 12:35:32.521417 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:35:32.521769 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:35:32.522020 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:35:32.522212 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:35:32.522852 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:35:32.523057 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Dec 16 12:35:32.523108 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Dec 16 12:35:32.525962 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:35:32.525975 systemd-tmpfiles[1285]: Skipping /boot Dec 16 12:35:32.531932 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:35:32.531943 systemd-tmpfiles[1285]: Skipping /boot Dec 16 12:35:32.552422 systemd-udevd[1288]: Using default interface naming scheme 'v255'. Dec 16 12:35:32.570859 zram_generator::config[1313]: No configuration found. Dec 16 12:35:32.758156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:35:32.759745 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:35:32.759826 systemd[1]: Reloading finished in 240 ms. Dec 16 12:35:32.771471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:35:32.780562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:35:32.811771 systemd[1]: Finished ensure-sysext.service. Dec 16 12:35:32.817006 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:35:32.819538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Dec 16 12:35:32.820835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:35:32.848430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:35:32.851232 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:35:32.855147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:35:32.857851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:35:32.859211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:35:32.861031 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:35:32.862292 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:35:32.863794 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:35:32.867873 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:35:32.877863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:35:32.883807 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:35:32.886988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:35:32.889939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:35:32.891770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:35:32.891957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:35:32.893671 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:35:32.896999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:35:32.898493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:35:32.898669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:35:32.900793 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:35:32.900973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:35:32.908252 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:35:32.914945 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:35:32.918404 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:35:32.919553 augenrules[1437]: No rules Dec 16 12:35:32.921208 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:35:32.921450 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:35:32.927312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:35:32.927477 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:35:32.928937 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:35:32.931834 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 16 12:35:32.932854 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:35:32.934741 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:35:32.947821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:35:32.960018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:35:32.972912 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:35:33.034475 systemd-networkd[1411]: lo: Link UP Dec 16 12:35:33.034486 systemd-networkd[1411]: lo: Gained carrier Dec 16 12:35:33.035407 systemd-networkd[1411]: Enumeration completed Dec 16 12:35:33.035515 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:35:33.035893 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:35:33.035903 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:35:33.036518 systemd-networkd[1411]: eth0: Link UP Dec 16 12:35:33.036773 systemd-networkd[1411]: eth0: Gained carrier Dec 16 12:35:33.036795 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:35:33.038131 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:35:33.039910 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:35:33.042232 systemd-resolved[1415]: Positive Trust Anchors: Dec 16 12:35:33.042254 systemd-resolved[1415]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:35:33.042287 systemd-resolved[1415]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:35:33.042289 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:35:33.044687 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:35:33.049204 systemd-resolved[1415]: Defaulting to hostname 'linux'. Dec 16 12:35:33.050765 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:35:33.051860 systemd[1]: Reached target network.target - Network. Dec 16 12:35:33.052709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:35:33.053712 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:35:33.054790 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:35:33.055970 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:35:33.057306 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Dec 16 12:35:33.058436 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:35:33.059638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:35:33.060829 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:35:33.060868 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:35:33.061662 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:35:33.063270 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:35:33.064808 systemd-networkd[1411]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:35:33.065890 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:35:33.067473 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Dec 16 12:35:33.068696 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:35:33.534849 systemd-resolved[1415]: Clock change detected. Flushing caches. Dec 16 12:35:33.534886 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:35:33.535052 systemd-timesyncd[1421]: Initial clock synchronization to Tue 2025-12-16 12:35:33.534791 UTC. Dec 16 12:35:33.535838 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:35:33.536915 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:35:33.540904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:35:33.542280 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:35:33.545613 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:35:33.547427 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:35:33.549294 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:35:33.550365 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:35:33.551330 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:35:33.551379 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:35:33.552543 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:35:33.554710 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:35:33.556522 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:35:33.566579 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:35:33.568693 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:35:33.569733 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:35:33.570933 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:35:33.574672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:35:33.576385 jq[1470]: false Dec 16 12:35:33.576984 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 16 12:35:33.579724 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:35:33.584637 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:35:33.588610 extend-filesystems[1471]: Found /dev/vda6 Dec 16 12:35:33.587128 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:35:33.587848 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:35:33.588801 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:35:33.591447 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:35:33.594628 extend-filesystems[1471]: Found /dev/vda9 Dec 16 12:35:33.596629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:35:33.597976 extend-filesystems[1471]: Checking size of /dev/vda9 Dec 16 12:35:33.598667 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:35:33.598877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:35:33.600640 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:35:33.600840 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:35:33.602269 jq[1488]: true Dec 16 12:35:33.604058 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:35:33.604272 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:35:33.615701 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:35:33.623625 update_engine[1483]: I20251216 12:35:33.621729 1483 main.cc:92] Flatcar Update Engine starting Dec 16 12:35:33.630843 extend-filesystems[1471]: Resized partition /dev/vda9 Dec 16 12:35:33.631948 tar[1493]: linux-arm64/LICENSE Dec 16 12:35:33.633121 tar[1493]: linux-arm64/helm Dec 16 12:35:33.634129 extend-filesystems[1509]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:35:33.642644 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:35:33.648871 jq[1495]: true Dec 16 12:35:33.670648 dbus-daemon[1468]: [system] SELinux support is enabled Dec 16 12:35:33.694764 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:35:33.670854 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:35:33.694899 update_engine[1483]: I20251216 12:35:33.685859 1483 update_check_scheduler.cc:74] Next update check in 7m14s Dec 16 12:35:33.676576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:35:33.676610 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:35:33.677998 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:35:33.678013 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:35:33.685713 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 12:35:33.689631 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:35:33.696692 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:35:33.696692 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:35:33.696692 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:35:33.705704 extend-filesystems[1471]: Resized filesystem in /dev/vda9 Dec 16 12:35:33.698204 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:35:33.701648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:35:33.713310 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:35:33.713545 systemd-logind[1480]: New seat seat0. Dec 16 12:35:33.714489 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:35:33.733060 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:35:33.737591 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:35:33.739426 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:35:33.758608 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:35:33.820161 containerd[1496]: time="2025-12-16T12:35:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:35:33.822167 containerd[1496]: time="2025-12-16T12:35:33.822123321Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:35:33.836089 containerd[1496]: time="2025-12-16T12:35:33.836043281Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.48µs" Dec 16 12:35:33.836089 containerd[1496]: time="2025-12-16T12:35:33.836086121Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:35:33.836193 containerd[1496]: time="2025-12-16T12:35:33.836107441Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:35:33.836297 containerd[1496]: time="2025-12-16T12:35:33.836276841Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:35:33.836320 containerd[1496]: time="2025-12-16T12:35:33.836298321Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:35:33.836347 containerd[1496]: time="2025-12-16T12:35:33.836326801Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836417 containerd[1496]: time="2025-12-16T12:35:33.836396961Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836417 containerd[1496]: time="2025-12-16T12:35:33.836414721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836700 containerd[1496]: time="2025-12-16T12:35:33.836675961Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be 
used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836725 containerd[1496]: time="2025-12-16T12:35:33.836698601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836725 containerd[1496]: time="2025-12-16T12:35:33.836712201Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836725 containerd[1496]: time="2025-12-16T12:35:33.836720441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:35:33.836821 containerd[1496]: time="2025-12-16T12:35:33.836803441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:35:33.837027 containerd[1496]: time="2025-12-16T12:35:33.837006961Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:35:33.837055 containerd[1496]: time="2025-12-16T12:35:33.837040401Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:35:33.837055 containerd[1496]: time="2025-12-16T12:35:33.837051641Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:35:33.837103 containerd[1496]: time="2025-12-16T12:35:33.837088441Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:35:33.837405 containerd[1496]: time="2025-12-16T12:35:33.837387321Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:35:33.837478 containerd[1496]: time="2025-12-16T12:35:33.837461641Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:35:33.844182 containerd[1496]: time="2025-12-16T12:35:33.844134481Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844229201Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844247241Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844258521Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844270841Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844281441Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:35:33.844300 containerd[1496]: time="2025-12-16T12:35:33.844300201Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:35:33.844408 containerd[1496]: time="2025-12-16T12:35:33.844313121Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:35:33.844408 
containerd[1496]: time="2025-12-16T12:35:33.844325281Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:35:33.844408 containerd[1496]: time="2025-12-16T12:35:33.844345921Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:35:33.844408 containerd[1496]: time="2025-12-16T12:35:33.844364201Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:35:33.844408 containerd[1496]: time="2025-12-16T12:35:33.844377321Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844523241Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844558361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844573841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844586361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844598801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844609641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844621601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844631841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844643161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844654321Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844664441Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844843601Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844867641Z" level=info msg="Start snapshots syncer" Dec 16 12:35:33.846095 containerd[1496]: time="2025-12-16T12:35:33.844906321Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:35:33.846899 containerd[1496]: time="2025-12-16T12:35:33.846837361Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:35:33.847349 containerd[1496]: time="2025-12-16T12:35:33.847241721Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:35:33.847436 containerd[1496]: time="2025-12-16T12:35:33.847418921Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:35:33.847681 containerd[1496]: time="2025-12-16T12:35:33.847658961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:35:33.847780 containerd[1496]: time="2025-12-16T12:35:33.847765161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:35:33.847833 containerd[1496]: time="2025-12-16T12:35:33.847821161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:35:33.847884 containerd[1496]: time="2025-12-16T12:35:33.847872481Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:35:33.847948 containerd[1496]: time="2025-12-16T12:35:33.847934481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:35:33.848002 containerd[1496]: time="2025-12-16T12:35:33.847989721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:35:33.848054 containerd[1496]: time="2025-12-16T12:35:33.848041921Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:35:33.848142 containerd[1496]: time="2025-12-16T12:35:33.848127081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:35:33.848197 containerd[1496]: 
time="2025-12-16T12:35:33.848184681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:35:33.848249 containerd[1496]: time="2025-12-16T12:35:33.848237521Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:35:33.848369 containerd[1496]: time="2025-12-16T12:35:33.848348961Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:35:33.848490 containerd[1496]: time="2025-12-16T12:35:33.848437521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:35:33.848544 containerd[1496]: time="2025-12-16T12:35:33.848532081Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:35:33.848622 containerd[1496]: time="2025-12-16T12:35:33.848607881Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:35:33.848669 containerd[1496]: time="2025-12-16T12:35:33.848657441Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:35:33.848716 containerd[1496]: time="2025-12-16T12:35:33.848704801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:35:33.848767 containerd[1496]: time="2025-12-16T12:35:33.848754241Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:35:33.848911 containerd[1496]: time="2025-12-16T12:35:33.848898041Z" level=info msg="runtime interface created" Dec 16 12:35:33.848957 containerd[1496]: time="2025-12-16T12:35:33.848946361Z" level=info msg="created NRI interface" Dec 16 12:35:33.849006 containerd[1496]: time="2025-12-16T12:35:33.848994761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:35:33.849063 containerd[1496]: time="2025-12-16T12:35:33.849051481Z" level=info msg="Connect containerd service" Dec 16 12:35:33.849141 containerd[1496]: time="2025-12-16T12:35:33.849127801Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:35:33.850573 containerd[1496]: time="2025-12-16T12:35:33.850181081Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:35:33.924158 containerd[1496]: time="2025-12-16T12:35:33.924095521Z" level=info msg="Start subscribing containerd event" Dec 16 12:35:33.924158 containerd[1496]: time="2025-12-16T12:35:33.924165881Z" level=info msg="Start recovering state" Dec 16 12:35:33.924279 containerd[1496]: time="2025-12-16T12:35:33.924251641Z" level=info msg="Start event monitor" Dec 16 12:35:33.924279 containerd[1496]: time="2025-12-16T12:35:33.924268441Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:35:33.924279 containerd[1496]: time="2025-12-16T12:35:33.924275241Z" level=info msg="Start streaming server" Dec 16 12:35:33.924330 containerd[1496]: time="2025-12-16T12:35:33.924284521Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:35:33.924330 containerd[1496]: 
time="2025-12-16T12:35:33.924291641Z" level=info msg="runtime interface starting up..." Dec 16 12:35:33.924330 containerd[1496]: time="2025-12-16T12:35:33.924297121Z" level=info msg="starting plugins..." Dec 16 12:35:33.924330 containerd[1496]: time="2025-12-16T12:35:33.924310121Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:35:33.924786 containerd[1496]: time="2025-12-16T12:35:33.924757801Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:35:33.924910 containerd[1496]: time="2025-12-16T12:35:33.924888601Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:35:33.925156 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:35:33.926732 containerd[1496]: time="2025-12-16T12:35:33.926704321Z" level=info msg="containerd successfully booted in 0.106965s" Dec 16 12:35:33.979021 tar[1493]: linux-arm64/README.md Dec 16 12:35:33.999508 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:35:34.080488 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:35:34.101847 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:35:34.104610 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:35:34.120687 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:35:34.120942 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:35:34.125083 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:35:34.151582 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:35:34.154435 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:35:34.156874 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:35:34.158193 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:35:34.531700 systemd-networkd[1411]: eth0: Gained IPv6LL Dec 16 12:35:34.534164 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:35:34.535914 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:35:34.540219 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:35:34.543946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:34.557852 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:35:34.580785 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:35:34.582931 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:35:34.583144 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:35:34.587311 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:35:35.126159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:35.127615 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:35:35.129646 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:35:35.130629 systemd[1]: Startup finished in 2.129s (kernel) + 4.737s (initrd) + 3.364s (userspace) = 10.231s. 
Dec 16 12:35:35.446514 kubelet[1599]: E1216 12:35:35.445888 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:35:35.449060 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:35:35.449206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:35:35.449641 systemd[1]: kubelet.service: Consumed 696ms CPU time, 248.8M memory peak. Dec 16 12:35:39.982405 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:35:39.983506 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:46908.service - OpenSSH per-connection server daemon (10.0.0.1:46908). Dec 16 12:35:40.079648 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 46908 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:40.081456 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:40.088408 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:35:40.089437 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:35:40.095175 systemd-logind[1480]: New session 1 of user core. Dec 16 12:35:40.110677 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:35:40.113930 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:35:40.129959 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:35:40.132661 systemd-logind[1480]: New session c1 of user core. Dec 16 12:35:40.245687 systemd[1617]: Queued start job for default target default.target. Dec 16 12:35:40.267793 systemd[1617]: Created slice app.slice - User Application Slice. Dec 16 12:35:40.267829 systemd[1617]: Reached target paths.target - Paths. Dec 16 12:35:40.267873 systemd[1617]: Reached target timers.target - Timers. Dec 16 12:35:40.269283 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:35:40.280731 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:35:40.280865 systemd[1617]: Reached target sockets.target - Sockets. Dec 16 12:35:40.280908 systemd[1617]: Reached target basic.target - Basic System. Dec 16 12:35:40.280936 systemd[1617]: Reached target default.target - Main User Target. Dec 16 12:35:40.280962 systemd[1617]: Startup finished in 141ms. Dec 16 12:35:40.281122 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:35:40.282747 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:35:40.351782 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:46918.service - OpenSSH per-connection server daemon (10.0.0.1:46918). Dec 16 12:35:40.411329 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 46918 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:40.412768 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:40.416779 systemd-logind[1480]: New session 2 of user core. Dec 16 12:35:40.424772 systemd[1]: Started session-2.scope - Session 2 of User core. 
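[Annotation] The kubelet exit above is the classic pre-kubeadm failure mode: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so systemd keeps restarting the unit until that happens. A minimal sketch of what that file contains, with illustrative values only (the real one is generated by kubeadm):

    # /var/lib/kubelet/config.yaml (sketch; field values are assumptions)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

The cgroupDriver: systemd choice is consistent with the SystemdCgroup=true runc option in the containerd config dumped earlier in this log.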
Dec 16 12:35:40.476749 sshd[1631]: Connection closed by 10.0.0.1 port 46918 Dec 16 12:35:40.477182 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Dec 16 12:35:40.485681 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:46918.service: Deactivated successfully. Dec 16 12:35:40.488940 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:35:40.489662 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:35:40.491713 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:46922.service - OpenSSH per-connection server daemon (10.0.0.1:46922). Dec 16 12:35:40.492708 systemd-logind[1480]: Removed session 2. Dec 16 12:35:40.549234 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 46922 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:40.550627 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:40.554642 systemd-logind[1480]: New session 3 of user core. Dec 16 12:35:40.565746 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:35:40.614205 sshd[1642]: Connection closed by 10.0.0.1 port 46922 Dec 16 12:35:40.614594 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Dec 16 12:35:40.635711 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:46922.service: Deactivated successfully. Dec 16 12:35:40.637129 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:35:40.637965 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:35:40.640006 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:46942.service - OpenSSH per-connection server daemon (10.0.0.1:46942). Dec 16 12:35:40.640480 systemd-logind[1480]: Removed session 3. Dec 16 12:35:40.699264 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:40.700675 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:40.704450 systemd-logind[1480]: New session 4 of user core. Dec 16 12:35:40.712744 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:35:40.764813 sshd[1651]: Connection closed by 10.0.0.1 port 46942 Dec 16 12:35:40.765222 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Dec 16 12:35:40.777589 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:46942.service: Deactivated successfully. Dec 16 12:35:40.780059 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:35:40.780792 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:35:40.783358 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954). Dec 16 12:35:40.784489 systemd-logind[1480]: Removed session 4. Dec 16 12:35:40.843911 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:40.845199 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:40.849882 systemd-logind[1480]: New session 5 of user core. Dec 16 12:35:40.859802 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:35:40.919268 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:35:40.919586 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:35:40.931640 sudo[1661]: pam_unix(sudo:session): session closed for user root Dec 16 12:35:40.933601 sshd[1660]: Connection closed by 10.0.0.1 port 46954 Dec 16 12:35:40.934797 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Dec 16 12:35:40.953200 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:46954.service: Deactivated successfully. Dec 16 12:35:40.955082 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:35:40.955891 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:35:40.958173 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:49682.service - OpenSSH per-connection server daemon (10.0.0.1:49682). Dec 16 12:35:40.959295 systemd-logind[1480]: Removed session 5. Dec 16 12:35:41.016156 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 49682 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:41.017878 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:41.021874 systemd-logind[1480]: New session 6 of user core. Dec 16 12:35:41.027774 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:35:41.080115 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:35:41.080454 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:35:41.086143 sudo[1672]: pam_unix(sudo:session): session closed for user root Dec 16 12:35:41.091819 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:35:41.092091 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:35:41.102970 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:35:41.146563 augenrules[1694]: No rules Dec 16 12:35:41.148009 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:35:41.149610 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:35:41.151071 sudo[1671]: pam_unix(sudo:session): session closed for user root Dec 16 12:35:41.153620 sshd[1670]: Connection closed by 10.0.0.1 port 49682 Dec 16 12:35:41.154105 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Dec 16 12:35:41.166007 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:49682.service: Deactivated successfully. Dec 16 12:35:41.168186 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:35:41.170177 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:35:41.173202 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:49684.service - OpenSSH per-connection server daemon (10.0.0.1:49684). Dec 16 12:35:41.173764 systemd-logind[1480]: Removed session 6. Dec 16 12:35:41.224668 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 49684 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:35:41.226024 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:35:41.230232 systemd-logind[1480]: New session 7 of user core. Dec 16 12:35:41.247783 systemd[1]: Started session-7.scope - Session 7 of User core. 
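[Annotation] The two sudo invocations above delete the shipped audit rule fragments and restart audit-rules.service, after which augenrules reports "No rules": augenrules assembles /etc/audit/audit.rules from the fragments in /etc/audit/rules.d/*.rules, and that directory is now empty. For reference, such a fragment is just auditctl syntax; the content below is hypothetical, not recovered from this host:

    # /etc/audit/rules.d/99-example.rules (illustrative only)
    -D
    -w /etc/kubernetes/ -p wa -k kube-config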
Dec 16 12:35:41.298963 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:35:41.299230 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:35:41.600982 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:35:41.629983 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:35:41.861617 dockerd[1727]: time="2025-12-16T12:35:41.861460921Z" level=info msg="Starting up" Dec 16 12:35:41.862440 dockerd[1727]: time="2025-12-16T12:35:41.862415841Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:35:41.874877 dockerd[1727]: time="2025-12-16T12:35:41.874831201Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:35:41.908685 dockerd[1727]: time="2025-12-16T12:35:41.908631521Z" level=info msg="Loading containers: start." Dec 16 12:35:41.919568 kernel: Initializing XFRM netlink socket Dec 16 12:35:42.160404 systemd-networkd[1411]: docker0: Link UP Dec 16 12:35:42.165603 dockerd[1727]: time="2025-12-16T12:35:42.165517761Z" level=info msg="Loading containers: done." Dec 16 12:35:42.182009 dockerd[1727]: time="2025-12-16T12:35:42.181939361Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:35:42.182184 dockerd[1727]: time="2025-12-16T12:35:42.182050161Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:35:42.182184 dockerd[1727]: time="2025-12-16T12:35:42.182146281Z" level=info msg="Initializing buildkit" Dec 16 12:35:42.210196 dockerd[1727]: time="2025-12-16T12:35:42.210104441Z" level=info msg="Completed buildkit initialization" Dec 16 12:35:42.217374 dockerd[1727]: time="2025-12-16T12:35:42.217323281Z" level=info msg="Daemon has completed initialization" Dec 16 12:35:42.217568 dockerd[1727]: time="2025-12-16T12:35:42.217416321Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:35:42.217716 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:35:42.697202 containerd[1496]: time="2025-12-16T12:35:42.697146081Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:35:43.426678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941547593.mount: Deactivated successfully. 
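[Annotation] docker.service is now up, but note that the PullImage records come from containerd[1496], not dockerd: the kube images are pulled through containerd's CRI plugin. The equivalent manual pull, assuming crictl is installed and pointed at the socket containerd reported it is serving on, would be something like:

    # illustrative; crictl being present is an assumption about this host's tooling
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.34.3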
Dec 16 12:35:44.375115 containerd[1496]: time="2025-12-16T12:35:44.375054801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:44.377245 containerd[1496]: time="2025-12-16T12:35:44.377156121Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042" Dec 16 12:35:44.378194 containerd[1496]: time="2025-12-16T12:35:44.378083641Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:44.381345 containerd[1496]: time="2025-12-16T12:35:44.381288881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:44.383385 containerd[1496]: time="2025-12-16T12:35:44.383193681Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.6860006s" Dec 16 12:35:44.383385 containerd[1496]: time="2025-12-16T12:35:44.383239281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 16 12:35:44.383947 containerd[1496]: time="2025-12-16T12:35:44.383880801Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 16 12:35:45.338379 containerd[1496]: time="2025-12-16T12:35:45.338327601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:45.338813 containerd[1496]: time="2025-12-16T12:35:45.338782641Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479" Dec 16 12:35:45.339932 containerd[1496]: time="2025-12-16T12:35:45.339874121Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:45.343005 containerd[1496]: time="2025-12-16T12:35:45.342946281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:45.344846 containerd[1496]: time="2025-12-16T12:35:45.344767921Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 960.85332ms" Dec 16 12:35:45.344846 containerd[1496]: time="2025-12-16T12:35:45.344810281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 16 12:35:45.345357 
containerd[1496]: time="2025-12-16T12:35:45.345331081Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 16 12:35:45.699666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:35:45.701298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:45.873073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:45.877181 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:35:45.928105 kubelet[2014]: E1216 12:35:45.928039 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:35:45.932114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:35:45.932244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:35:45.933694 systemd[1]: kubelet.service: Consumed 158ms CPU time, 107.5M memory peak. Dec 16 12:35:46.307446 containerd[1496]: time="2025-12-16T12:35:46.307384521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:46.308323 containerd[1496]: time="2025-12-16T12:35:46.308297561Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718" Dec 16 12:35:46.309582 containerd[1496]: time="2025-12-16T12:35:46.309542801Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:46.313068 containerd[1496]: time="2025-12-16T12:35:46.313037601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:46.314002 containerd[1496]: time="2025-12-16T12:35:46.313970801Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 968.60712ms" Dec 16 12:35:46.314070 containerd[1496]: time="2025-12-16T12:35:46.314008441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 16 12:35:46.314430 containerd[1496]: time="2025-12-16T12:35:46.314408281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 16 12:35:47.407884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495267887.mount: Deactivated successfully. 
Dec 16 12:35:47.613622 containerd[1496]: time="2025-12-16T12:35:47.613522761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:47.614414 containerd[1496]: time="2025-12-16T12:35:47.614381001Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255" Dec 16 12:35:47.615325 containerd[1496]: time="2025-12-16T12:35:47.615290961Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:47.618030 containerd[1496]: time="2025-12-16T12:35:47.617990761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:47.619001 containerd[1496]: time="2025-12-16T12:35:47.618966401Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.30452868s" Dec 16 12:35:47.619048 containerd[1496]: time="2025-12-16T12:35:47.619003641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 16 12:35:47.619859 containerd[1496]: time="2025-12-16T12:35:47.619611921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 16 12:35:48.149465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203146337.mount: Deactivated successfully. 
Dec 16 12:35:48.957559 containerd[1496]: time="2025-12-16T12:35:48.957506361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:48.958443 containerd[1496]: time="2025-12-16T12:35:48.958367641Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Dec 16 12:35:48.959223 containerd[1496]: time="2025-12-16T12:35:48.959192721Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:48.962748 containerd[1496]: time="2025-12-16T12:35:48.962703441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:48.964732 containerd[1496]: time="2025-12-16T12:35:48.964676961Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.34502476s" Dec 16 12:35:48.964732 containerd[1496]: time="2025-12-16T12:35:48.964724161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 16 12:35:48.965324 containerd[1496]: time="2025-12-16T12:35:48.965203481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 16 12:35:49.384103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761576880.mount: Deactivated successfully. 
Dec 16 12:35:49.387919 containerd[1496]: time="2025-12-16T12:35:49.387862441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:49.388708 containerd[1496]: time="2025-12-16T12:35:49.388669281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Dec 16 12:35:49.389756 containerd[1496]: time="2025-12-16T12:35:49.389700761Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:49.391991 containerd[1496]: time="2025-12-16T12:35:49.391938521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:49.392788 containerd[1496]: time="2025-12-16T12:35:49.392750681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 427.51828ms" Dec 16 12:35:49.392788 containerd[1496]: time="2025-12-16T12:35:49.392793441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 16 12:35:49.393287 containerd[1496]: time="2025-12-16T12:35:49.393250961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 16 12:35:49.927419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1130775128.mount: Deactivated successfully. Dec 16 12:35:52.523610 containerd[1496]: time="2025-12-16T12:35:52.523336161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:52.524403 containerd[1496]: time="2025-12-16T12:35:52.524011721Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Dec 16 12:35:52.525185 containerd[1496]: time="2025-12-16T12:35:52.525155521Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:52.527803 containerd[1496]: time="2025-12-16T12:35:52.527766681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:35:52.529072 containerd[1496]: time="2025-12-16T12:35:52.529035761Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.135736s" Dec 16 12:35:52.529130 containerd[1496]: time="2025-12-16T12:35:52.529069881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 16 12:35:56.182688 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 16 12:35:56.184169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:56.347270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:56.359911 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:35:56.395809 kubelet[2176]: E1216 12:35:56.395751 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:35:56.398329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:35:56.398469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:35:56.398782 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.6M memory peak. Dec 16 12:35:57.331692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:57.331838 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.6M memory peak. Dec 16 12:35:57.333970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:57.362441 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)... Dec 16 12:35:57.362462 systemd[1]: Reloading... Dec 16 12:35:57.455616 zram_generator::config[2237]: No configuration found. Dec 16 12:35:57.750715 systemd[1]: Reloading finished in 387 ms. Dec 16 12:35:57.801787 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:57.805096 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:35:57.805330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:57.805385 systemd[1]: kubelet.service: Consumed 104ms CPU time, 95.2M memory peak. Dec 16 12:35:57.807072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:35:57.974058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:35:57.978235 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:35:58.013894 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:35:58.013894 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
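[Annotation] Both deprecation warnings in the record above point at the same migration: flags move into the file passed via --config. Continuing the /var/lib/kubelet/config.yaml sketch from earlier, the mapping would look roughly like this (only the Flexvolume path is corroborated by a later log line; the rest is an assumption):

    # --volume-plugin-dir=...  becomes a KubeletConfiguration field:
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    # --pod-infra-container-image has no config replacement; per the first warning,
    # the sandbox image is obtained from the CRI runtime (containerd) instead.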
Dec 16 12:35:58.015718 kubelet[2281]: I1216 12:35:58.014901 2281 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:35:58.685568 kubelet[2281]: I1216 12:35:58.685518 2281 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:35:58.685568 kubelet[2281]: I1216 12:35:58.685558 2281 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:35:58.686660 kubelet[2281]: I1216 12:35:58.686623 2281 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:35:58.686660 kubelet[2281]: I1216 12:35:58.686648 2281 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:35:58.686890 kubelet[2281]: I1216 12:35:58.686863 2281 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:35:58.810205 kubelet[2281]: E1216 12:35:58.810166 2281 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:35:58.810338 kubelet[2281]: I1216 12:35:58.810318 2281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:35:58.815386 kubelet[2281]: I1216 12:35:58.815365 2281 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:35:58.818131 kubelet[2281]: I1216 12:35:58.818107 2281 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:35:58.818439 kubelet[2281]: I1216 12:35:58.818412 2281 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:35:58.818662 kubelet[2281]: I1216 12:35:58.818496 2281 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:35:58.818793 kubelet[2281]: I1216 12:35:58.818779 2281 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:35:58.818846 kubelet[2281]: I1216 12:35:58.818838 2281 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:35:58.818988 kubelet[2281]: I1216 12:35:58.818976 2281 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:35:58.821568 kubelet[2281]: I1216 12:35:58.821527 2281 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:35:58.822728 kubelet[2281]: I1216 12:35:58.822707 2281 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:35:58.822817 kubelet[2281]: I1216 12:35:58.822806 2281 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:35:58.823445 kubelet[2281]: E1216 12:35:58.823223 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:35:58.823445 kubelet[2281]: I1216 12:35:58.823306 2281 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:35:58.823445 kubelet[2281]: I1216 12:35:58.823327 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:35:58.823812 kubelet[2281]: E1216 12:35:58.823769 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:35:58.824343 kubelet[2281]: I1216 12:35:58.824320 2281 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:35:58.825089 kubelet[2281]: I1216 12:35:58.825050 2281 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:35:58.825089 kubelet[2281]: I1216 12:35:58.825090 2281 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:35:58.825153 kubelet[2281]: W1216 12:35:58.825134 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:35:58.827826 kubelet[2281]: I1216 12:35:58.827794 2281 server.go:1262] "Started kubelet" Dec 16 12:35:58.828678 kubelet[2281]: I1216 12:35:58.828645 2281 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:35:58.828978 kubelet[2281]: I1216 12:35:58.828958 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:35:58.829894 kubelet[2281]: I1216 12:35:58.829217 2281 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:35:58.829894 kubelet[2281]: I1216 12:35:58.829299 2281 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:35:58.829894 kubelet[2281]: I1216 12:35:58.829506 2281 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:35:58.829894 kubelet[2281]: I1216 12:35:58.829632 2281 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:35:58.830647 kubelet[2281]: I1216 12:35:58.830622 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:35:58.832538 kubelet[2281]: E1216 12:35:58.830865 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b249c5da46b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:35:58.827718321 +0000 UTC m=+0.846681081,LastTimestamp:2025-12-16 12:35:58.827718321 +0000 UTC m=+0.846681081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:35:58.832538 kubelet[2281]: E1216 12:35:58.832227 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:35:58.832538 kubelet[2281]: I1216 12:35:58.832259 2281 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:35:58.832538 kubelet[2281]: I1216 12:35:58.832333 2281 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:35:58.832538 kubelet[2281]: I1216 12:35:58.832401 2281 reconciler.go:29] "Reconciler: start to sync state" 
Dec 16 12:35:58.832962 kubelet[2281]: E1216 12:35:58.832603 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Dec 16 12:35:58.833056 kubelet[2281]: E1216 12:35:58.833023 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:35:58.833645 kubelet[2281]: E1216 12:35:58.833614 2281 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:35:58.833871 kubelet[2281]: I1216 12:35:58.833850 2281 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:35:58.833948 kubelet[2281]: I1216 12:35:58.833932 2281 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:35:58.837390 kubelet[2281]: I1216 12:35:58.837346 2281 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:35:58.846141 kubelet[2281]: I1216 12:35:58.846040 2281 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:35:58.846141 kubelet[2281]: I1216 12:35:58.846057 2281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:35:58.846141 kubelet[2281]: I1216 12:35:58.846072 2281 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:35:58.848749 kubelet[2281]: I1216 12:35:58.848713 2281 policy_none.go:49] "None policy: Start" Dec 16 12:35:58.848749 kubelet[2281]: I1216 12:35:58.848753 2281 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:35:58.848848 kubelet[2281]: I1216 12:35:58.848766 2281 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:35:58.850332 kubelet[2281]: I1216 12:35:58.850314 2281 policy_none.go:47] "Start" Dec 16 12:35:58.851735 kubelet[2281]: I1216 12:35:58.851708 2281 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:35:58.852711 kubelet[2281]: I1216 12:35:58.852695 2281 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:35:58.852711 kubelet[2281]: I1216 12:35:58.852711 2281 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:35:58.852793 kubelet[2281]: I1216 12:35:58.852731 2281 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:35:58.852793 kubelet[2281]: E1216 12:35:58.852771 2281 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:35:58.853411 kubelet[2281]: E1216 12:35:58.853361 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:35:58.856177 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:35:58.874667 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:35:58.879071 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:35:58.897811 kubelet[2281]: E1216 12:35:58.897782 2281 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:35:58.898025 kubelet[2281]: I1216 12:35:58.898011 2281 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:35:58.898064 kubelet[2281]: I1216 12:35:58.898026 2281 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:35:58.898425 kubelet[2281]: I1216 12:35:58.898408 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:35:58.899122 kubelet[2281]: E1216 12:35:58.899098 2281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:35:58.899199 kubelet[2281]: E1216 12:35:58.899139 2281 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:35:58.966750 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Dec 16 12:35:58.991156 kubelet[2281]: E1216 12:35:58.991100 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:35:58.994967 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 16 12:35:58.999320 kubelet[2281]: E1216 12:35:58.999264 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:35:59.000051 systemd[1]: Created slice kubepods-burstable-pod76082254f8313679032988c2daec9cd9.slice - libcontainer container kubepods-burstable-pod76082254f8313679032988c2daec9cd9.slice. 
Dec 16 12:35:59.001365 kubelet[2281]: I1216 12:35:59.000945 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:35:59.001471 kubelet[2281]: E1216 12:35:59.001434 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 16 12:35:59.002119 kubelet[2281]: E1216 12:35:59.002091 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:35:59.033625 kubelet[2281]: E1216 12:35:59.033589 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Dec 16 12:35:59.134038 kubelet[2281]: I1216 12:35:59.133949 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:35:59.134038 kubelet[2281]: I1216 12:35:59.133995 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:35:59.134038 kubelet[2281]: I1216 12:35:59.134011 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:35:59.134247 kubelet[2281]: I1216 12:35:59.134066 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:35:59.134247 kubelet[2281]: I1216 12:35:59.134120 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:35:59.134247 kubelet[2281]: I1216 12:35:59.134197 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:35:59.134247 kubelet[2281]: I1216 12:35:59.134214 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:35:59.134247 kubelet[2281]: I1216 12:35:59.134228 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:35:59.134406 kubelet[2281]: I1216 12:35:59.134252 2281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:35:59.203287 kubelet[2281]: I1216 12:35:59.203230 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:35:59.203674 kubelet[2281]: E1216 12:35:59.203625 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 16 12:35:59.295005 containerd[1496]: time="2025-12-16T12:35:59.294912441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Dec 16 12:35:59.303462 containerd[1496]: time="2025-12-16T12:35:59.303384881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Dec 16 12:35:59.305492 containerd[1496]: time="2025-12-16T12:35:59.305460841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76082254f8313679032988c2daec9cd9,Namespace:kube-system,Attempt:0,}" Dec 16 12:35:59.435094 kubelet[2281]: E1216 12:35:59.435025 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Dec 16 12:35:59.605112 kubelet[2281]: I1216 12:35:59.605011 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:35:59.605605 kubelet[2281]: E1216 12:35:59.605573 2281 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 16 12:35:59.757891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303514530.mount: Deactivated successfully. 
Dec 16 12:35:59.767191 containerd[1496]: time="2025-12-16T12:35:59.767126361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:35:59.767869 containerd[1496]: time="2025-12-16T12:35:59.767839401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 16 12:35:59.771754 containerd[1496]: time="2025-12-16T12:35:59.771683761Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:35:59.774773 containerd[1496]: time="2025-12-16T12:35:59.774723481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:35:59.777280 containerd[1496]: time="2025-12-16T12:35:59.776657721Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:35:59.777465 containerd[1496]: time="2025-12-16T12:35:59.777445361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:35:59.778306 containerd[1496]: time="2025-12-16T12:35:59.778270881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:35:59.780507 containerd[1496]: time="2025-12-16T12:35:59.780462801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:35:59.782009 containerd[1496]: time="2025-12-16T12:35:59.781967001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 483.79944ms"
Dec 16 12:35:59.784395 containerd[1496]: time="2025-12-16T12:35:59.784333201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 476.58656ms"
Dec 16 12:35:59.789868 containerd[1496]: time="2025-12-16T12:35:59.789621921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.17492ms"
Dec 16 12:35:59.812853 containerd[1496]: time="2025-12-16T12:35:59.812775121Z" level=info msg="connecting to shim a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb" address="unix:///run/containerd/s/4da01265593aab4f562a8eb204751a76d2f8aafea0d8d44f8449b1cb6e1422e2" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:35:59.819969 containerd[1496]: time="2025-12-16T12:35:59.819920481Z" level=info msg="connecting to shim c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0" address="unix:///run/containerd/s/f06fd721b22327c99aedc565af54e5b4360b67dc7b041c28428117df1a3fb8d1" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:35:59.838970 containerd[1496]: time="2025-12-16T12:35:59.838813241Z" level=info msg="connecting to shim 23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45" address="unix:///run/containerd/s/08f38b8dc16fcc889a2f185b46b75e42ed3db62e63d81be2ec183581eebfc2db" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:35:59.840716 systemd[1]: Started cri-containerd-a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb.scope - libcontainer container a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb.
Dec 16 12:35:59.850160 kubelet[2281]: E1216 12:35:59.850122 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:35:59.863808 systemd[1]: Started cri-containerd-c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0.scope - libcontainer container c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0.
Dec 16 12:35:59.869027 systemd[1]: Started cri-containerd-23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45.scope - libcontainer container 23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45.
Dec 16 12:35:59.908575 kubelet[2281]: E1216 12:35:59.907974 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:35:59.920383 containerd[1496]: time="2025-12-16T12:35:59.920285041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:76082254f8313679032988c2daec9cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb\""
Dec 16 12:35:59.923335 containerd[1496]: time="2025-12-16T12:35:59.923287921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0\""
Dec 16 12:35:59.926254 containerd[1496]: time="2025-12-16T12:35:59.926205081Z" level=info msg="CreateContainer within sandbox \"a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 12:35:59.926451 containerd[1496]: time="2025-12-16T12:35:59.926324201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45\""
Dec 16 12:35:59.928214 containerd[1496]: time="2025-12-16T12:35:59.928182041Z" level=info msg="CreateContainer within sandbox \"c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 12:35:59.932074 containerd[1496]: time="2025-12-16T12:35:59.931464201Z" level=info msg="CreateContainer within sandbox \"23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 12:35:59.937523 containerd[1496]: time="2025-12-16T12:35:59.937470001Z" level=info msg="Container 7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:35:59.939687 containerd[1496]: time="2025-12-16T12:35:59.939634601Z" level=info msg="Container 29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:35:59.947931 containerd[1496]: time="2025-12-16T12:35:59.947875361Z" level=info msg="Container a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:35:59.951033 containerd[1496]: time="2025-12-16T12:35:59.950972681Z" level=info msg="CreateContainer within sandbox \"a69f3fc9a90f9037d0a0b02a4cf018c7b8ddae11d5d43d32116e50fa626d59cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3\""
Dec 16 12:35:59.951978 containerd[1496]: time="2025-12-16T12:35:59.951903601Z" level=info msg="StartContainer for \"7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3\""
Dec 16 12:35:59.952496 containerd[1496]: time="2025-12-16T12:35:59.952441041Z" level=info msg="CreateContainer within sandbox \"c1d68809b7be3a18f42ecc84b83e89f008836efb4b4900c5ad14be3b751d15b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9\""
Dec 16 12:35:59.953039 containerd[1496]: time="2025-12-16T12:35:59.952986361Z" level=info msg="StartContainer for \"29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9\""
Dec 16 12:35:59.954627 containerd[1496]: time="2025-12-16T12:35:59.954596921Z" level=info msg="connecting to shim 29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9" address="unix:///run/containerd/s/f06fd721b22327c99aedc565af54e5b4360b67dc7b041c28428117df1a3fb8d1" protocol=ttrpc version=3
Dec 16 12:35:59.955302 containerd[1496]: time="2025-12-16T12:35:59.955268281Z" level=info msg="connecting to shim 7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3" address="unix:///run/containerd/s/4da01265593aab4f562a8eb204751a76d2f8aafea0d8d44f8449b1cb6e1422e2" protocol=ttrpc version=3
Dec 16 12:35:59.958159 containerd[1496]: time="2025-12-16T12:35:59.958121721Z" level=info msg="CreateContainer within sandbox \"23ac1d62fea9366fc37cb5a7d76bc019db23b3a2e729921ee0e1e7712faa6b45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a\""
Dec 16 12:35:59.958809 containerd[1496]: time="2025-12-16T12:35:59.958782321Z" level=info msg="StartContainer for \"a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a\""
Dec 16 12:35:59.962214 containerd[1496]: time="2025-12-16T12:35:59.962156041Z" level=info msg="connecting to shim a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a" address="unix:///run/containerd/s/08f38b8dc16fcc889a2f185b46b75e42ed3db62e63d81be2ec183581eebfc2db" protocol=ttrpc version=3
Dec 16 12:35:59.975665 systemd[1]: Started cri-containerd-7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3.scope - libcontainer container 7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3.
Dec 16 12:35:59.981159 systemd[1]: Started cri-containerd-29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9.scope - libcontainer container 29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9.
Dec 16 12:35:59.982700 systemd[1]: Started cri-containerd-a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a.scope - libcontainer container a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a.
Dec 16 12:36:00.010621 kubelet[2281]: E1216 12:36:00.010114 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:36:00.026743 kubelet[2281]: E1216 12:36:00.026686 2281 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:36:00.032696 containerd[1496]: time="2025-12-16T12:36:00.032646161Z" level=info msg="StartContainer for \"7f0577453867329a0ff0ab158768cf8419b97662e4f95cb64733fec16ed6aaa3\" returns successfully"
Dec 16 12:36:00.047817 containerd[1496]: time="2025-12-16T12:36:00.047774001Z" level=info msg="StartContainer for \"a5ee8de1df4179dd97216fa0167d021cd52d16e6ee20806e0d065d0a102a305a\" returns successfully"
Dec 16 12:36:00.049065 containerd[1496]: time="2025-12-16T12:36:00.049035441Z" level=info msg="StartContainer for \"29cd36e6e68c34ee55138620526319a688b1925e8c99453a06568a84694cc1f9\" returns successfully"
Dec 16 12:36:00.408013 kubelet[2281]: I1216 12:36:00.407978 2281 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:36:00.865223 kubelet[2281]: E1216 12:36:00.865062 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:36:00.865810 kubelet[2281]: E1216 12:36:00.865787 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:36:00.868719 kubelet[2281]: E1216 12:36:00.868686 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:36:01.220386 kubelet[2281]: E1216 12:36:01.220037 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 16 12:36:01.256944 kubelet[2281]: I1216 12:36:01.256893 2281 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 16 12:36:01.256944 kubelet[2281]: E1216 12:36:01.256933 2281 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 16 12:36:01.292815 kubelet[2281]: E1216 12:36:01.292773 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.393580 kubelet[2281]: E1216 12:36:01.393526 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.494399 kubelet[2281]: E1216 12:36:01.494282 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.594942 kubelet[2281]: E1216 12:36:01.594890 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.695420 kubelet[2281]: E1216 12:36:01.695374 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.795857 kubelet[2281]: E1216 12:36:01.795798 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.869833 kubelet[2281]: E1216 12:36:01.869722 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:36:01.869966 kubelet[2281]: E1216 12:36:01.869894 2281 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 16 12:36:01.896493 kubelet[2281]: E1216 12:36:01.896428 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:01.997721 kubelet[2281]: E1216 12:36:01.997679 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:02.098934 kubelet[2281]: E1216 12:36:02.098783 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:02.199103 kubelet[2281]: E1216 12:36:02.199054 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:02.299807 kubelet[2281]: E1216 12:36:02.299768 2281 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:02.433359 kubelet[2281]: I1216 12:36:02.433246 2281 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:02.443581 kubelet[2281]: I1216 12:36:02.443475 2281 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:02.448811 kubelet[2281]: I1216 12:36:02.448765 2281 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:02.825513 kubelet[2281]: I1216 12:36:02.825452 2281 apiserver.go:52] "Watching apiserver"
Dec 16 12:36:02.832558 kubelet[2281]: I1216 12:36:02.832491 2281 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 12:36:03.861754 systemd[1]: Reload requested from client PID 2571 ('systemctl') (unit session-7.scope)...
Dec 16 12:36:03.861775 systemd[1]: Reloading...
Dec 16 12:36:03.935584 zram_generator::config[2614]: No configuration found.
Dec 16 12:36:04.122887 systemd[1]: Reloading finished in 260 ms.
Dec 16 12:36:04.155173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:36:04.174698 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 12:36:04.174967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:36:04.175026 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 124.2M memory peak.
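The "Creating a mirror pod for static pod" entries above refer to the apiserver-side copies of the static pods defined under /etc/kubernetes/manifests. A hedged sketch of listing those mirrors once the apiserver is reachable, assuming the `kubernetes` Python client and a valid kubeconfig; mirror pods carry the kubernetes.io/config.mirror annotation:

```python
from kubernetes import client, config

config.load_kube_config()           # assumes an admin kubeconfig is available
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("kube-system").items:
    mirror = (pod.metadata.annotations or {}).get("kubernetes.io/config.mirror")
    if mirror:
        # e.g. kube-apiserver-localhost, kube-controller-manager-localhost, ...
        print(pod.metadata.name, "is a mirror of a static pod, hash", mirror)
```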
Dec 16 12:36:04.178770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:36:04.348793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:36:04.373970 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:36:04.415048 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:36:04.415048 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:36:04.415385 kubelet[2656]: I1216 12:36:04.415102 2656 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:36:04.421155 kubelet[2656]: I1216 12:36:04.421108 2656 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:36:04.421155 kubelet[2656]: I1216 12:36:04.421137 2656 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:36:04.421155 kubelet[2656]: I1216 12:36:04.421167 2656 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:36:04.421328 kubelet[2656]: I1216 12:36:04.421173 2656 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:36:04.421608 kubelet[2656]: I1216 12:36:04.421586 2656 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:36:04.423101 kubelet[2656]: I1216 12:36:04.423083 2656 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 16 12:36:04.425299 kubelet[2656]: I1216 12:36:04.425259 2656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:36:04.433599 kubelet[2656]: I1216 12:36:04.430583 2656 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:36:04.436643 kubelet[2656]: I1216 12:36:04.436607 2656 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:36:04.436859 kubelet[2656]: I1216 12:36:04.436827 2656 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:36:04.437006 kubelet[2656]: I1216 12:36:04.436861 2656 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:36:04.437093 kubelet[2656]: I1216 12:36:04.437008 2656 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:36:04.437093 kubelet[2656]: I1216 12:36:04.437016 2656 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:36:04.437093 kubelet[2656]: I1216 12:36:04.437038 2656 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:36:04.437893 kubelet[2656]: I1216 12:36:04.437871 2656 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:36:04.438035 kubelet[2656]: I1216 12:36:04.438023 2656 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:36:04.438065 kubelet[2656]: I1216 12:36:04.438041 2656 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:36:04.438065 kubelet[2656]: I1216 12:36:04.438065 2656 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:36:04.438611 kubelet[2656]: I1216 12:36:04.438077 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:36:04.439119 kubelet[2656]: I1216 12:36:04.438963 2656 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:36:04.440045 kubelet[2656]: I1216 12:36:04.440010 2656 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:36:04.440116 kubelet[2656]: I1216 12:36:04.440048 2656 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 12:36:04.442984 kubelet[2656]: I1216 12:36:04.442964 2656 server.go:1262] "Started kubelet"
Dec 16 12:36:04.443816 kubelet[2656]: I1216 12:36:04.443793 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:36:04.443816 kubelet[2656]: I1216 12:36:04.443778 2656 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:36:04.443914 kubelet[2656]: I1216 12:36:04.443841 2656 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 12:36:04.444071 kubelet[2656]: I1216 12:36:04.444041 2656 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:36:04.444125 kubelet[2656]: I1216 12:36:04.444106 2656 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:36:04.450566 kubelet[2656]: I1216 12:36:04.448610 2656 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 12:36:04.450566 kubelet[2656]: E1216 12:36:04.450455 2656 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 16 12:36:04.450566 kubelet[2656]: I1216 12:36:04.450485 2656 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 12:36:04.450729 kubelet[2656]: I1216 12:36:04.450656 2656 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 12:36:04.450852 kubelet[2656]: I1216 12:36:04.450797 2656 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 12:36:04.453556 kubelet[2656]: I1216 12:36:04.452017 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:36:04.456053 kubelet[2656]: E1216 12:36:04.456003 2656 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:36:04.467647 kubelet[2656]: I1216 12:36:04.467545 2656 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:36:04.468593 kubelet[2656]: I1216 12:36:04.467794 2656 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:36:04.468593 kubelet[2656]: I1216 12:36:04.467893 2656 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:36:04.474649 kubelet[2656]: I1216 12:36:04.474606 2656 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:36:04.476071 kubelet[2656]: I1216 12:36:04.476034 2656 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:36:04.476071 kubelet[2656]: I1216 12:36:04.476060 2656 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 12:36:04.476167 kubelet[2656]: I1216 12:36:04.476081 2656 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 12:36:04.476167 kubelet[2656]: E1216 12:36:04.476119 2656 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:36:04.507676 kubelet[2656]: I1216 12:36:04.507649 2656 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:36:04.507676 kubelet[2656]: I1216 12:36:04.507668 2656 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:36:04.507823 kubelet[2656]: I1216 12:36:04.507688 2656 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:36:04.507823 kubelet[2656]: I1216 12:36:04.507814 2656 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 12:36:04.507870 kubelet[2656]: I1216 12:36:04.507828 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 12:36:04.508634 kubelet[2656]: I1216 12:36:04.508601 2656 policy_none.go:49] "None policy: Start"
Dec 16 12:36:04.508676 kubelet[2656]: I1216 12:36:04.508643 2656 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 12:36:04.508676 kubelet[2656]: I1216 12:36:04.508659 2656 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 12:36:04.508797 kubelet[2656]: I1216 12:36:04.508783 2656 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 16 12:36:04.508797 kubelet[2656]: I1216 12:36:04.508796 2656 policy_none.go:47] "Start"
Dec 16 12:36:04.512890 kubelet[2656]: E1216 12:36:04.512782 2656 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:36:04.512989 kubelet[2656]: I1216 12:36:04.512934 2656 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:36:04.512989 kubelet[2656]: I1216 12:36:04.512947 2656 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:36:04.513162 kubelet[2656]: I1216 12:36:04.513135 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:36:04.514020 kubelet[2656]: E1216 12:36:04.513927 2656 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:36:04.576802 kubelet[2656]: I1216 12:36:04.576769 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:04.576906 kubelet[2656]: I1216 12:36:04.576833 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.576966 kubelet[2656]: I1216 12:36:04.576929 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:04.582394 kubelet[2656]: E1216 12:36:04.582353 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:04.583308 kubelet[2656]: E1216 12:36:04.583276 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:04.583308 kubelet[2656]: E1216 12:36:04.583294 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.614771 kubelet[2656]: I1216 12:36:04.614731 2656 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 16 12:36:04.623471 kubelet[2656]: I1216 12:36:04.623166 2656 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 16 12:36:04.623471 kubelet[2656]: I1216 12:36:04.623266 2656 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 16 12:36:04.752423 kubelet[2656]: I1216 12:36:04.752303 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:04.752423 kubelet[2656]: I1216 12:36:04.752351 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.752571 kubelet[2656]: I1216 12:36:04.752434 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.752571 kubelet[2656]: I1216 12:36:04.752486 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.752571 kubelet[2656]: I1216 12:36:04.752522 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.752571 kubelet[2656]: I1216 12:36:04.752543 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:04.752666 kubelet[2656]: I1216 12:36:04.752589 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:04.752666 kubelet[2656]: I1216 12:36:04.752606 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:04.752666 kubelet[2656]: I1216 12:36:04.752621 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76082254f8313679032988c2daec9cd9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"76082254f8313679032988c2daec9cd9\") " pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:05.439306 kubelet[2656]: I1216 12:36:05.439164 2656 apiserver.go:52] "Watching apiserver"
Dec 16 12:36:05.450817 kubelet[2656]: I1216 12:36:05.450785 2656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 16 12:36:05.492543 kubelet[2656]: I1216 12:36:05.492499 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:05.492695 kubelet[2656]: I1216 12:36:05.492619 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:05.492880 kubelet[2656]: I1216 12:36:05.492858 2656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:05.501808 kubelet[2656]: E1216 12:36:05.501767 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 16 12:36:05.502655 kubelet[2656]: E1216 12:36:05.502615 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Dec 16 12:36:05.503367 kubelet[2656]: E1216 12:36:05.503342 2656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 16 12:36:05.516044 kubelet[2656]: I1216 12:36:05.515981 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.515935921 podStartE2EDuration="3.515935921s" podCreationTimestamp="2025-12-16 12:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:05.513277241 +0000 UTC m=+1.135345961" watchObservedRunningTime="2025-12-16 12:36:05.515935921 +0000 UTC m=+1.138004601"
Dec 16 12:36:05.528400 kubelet[2656]: I1216 12:36:05.528334 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.528319881 podStartE2EDuration="3.528319881s" podCreationTimestamp="2025-12-16 12:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:05.527983961 +0000 UTC m=+1.150052681" watchObservedRunningTime="2025-12-16 12:36:05.528319881 +0000 UTC m=+1.150388641"
Dec 16 12:36:05.553902 kubelet[2656]: I1216 12:36:05.553834 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.553819121 podStartE2EDuration="3.553819121s" podCreationTimestamp="2025-12-16 12:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:05.540140761 +0000 UTC m=+1.162209481" watchObservedRunningTime="2025-12-16 12:36:05.553819121 +0000 UTC m=+1.175887841"
Dec 16 12:36:09.349640 kubelet[2656]: I1216 12:36:09.349605 2656 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 12:36:09.350386 containerd[1496]: time="2025-12-16T12:36:09.350249805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 16 12:36:09.350745 kubelet[2656]: I1216 12:36:09.350455 2656 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 16 12:36:10.216222 systemd[1]: Created slice kubepods-besteffort-poda52341a7_d471_4d24_a8b6_22076ae6f07d.slice - libcontainer container kubepods-besteffort-poda52341a7_d471_4d24_a8b6_22076ae6f07d.slice.
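The podStartSLOduration values reported by pod_startup_latency_tracker above are the gap between podCreationTimestamp and watchObservedRunningTime. A quick check of the kube-apiserver-localhost numbers from the log (Python's datetime is microsecond-resolution, so the nanosecond tail is truncated):

```python
from datetime import datetime, timezone

# Values copied from the "Observed pod startup duration" entry above.
created  = datetime(2025, 12, 16, 12, 36, 2, tzinfo=timezone.utc)
observed = datetime(2025, 12, 16, 12, 36, 5, 515935, tzinfo=timezone.utc)
print(observed - created)   # 0:00:03.515935, matching podStartSLOduration=3.515935921
```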
Dec 16 12:36:10.289599 kubelet[2656]: I1216 12:36:10.289433 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a52341a7-d471-4d24-a8b6-22076ae6f07d-xtables-lock\") pod \"kube-proxy-b7qj2\" (UID: \"a52341a7-d471-4d24-a8b6-22076ae6f07d\") " pod="kube-system/kube-proxy-b7qj2"
Dec 16 12:36:10.289599 kubelet[2656]: I1216 12:36:10.289478 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db6bx\" (UniqueName: \"kubernetes.io/projected/a52341a7-d471-4d24-a8b6-22076ae6f07d-kube-api-access-db6bx\") pod \"kube-proxy-b7qj2\" (UID: \"a52341a7-d471-4d24-a8b6-22076ae6f07d\") " pod="kube-system/kube-proxy-b7qj2"
Dec 16 12:36:10.289599 kubelet[2656]: I1216 12:36:10.289514 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a52341a7-d471-4d24-a8b6-22076ae6f07d-kube-proxy\") pod \"kube-proxy-b7qj2\" (UID: \"a52341a7-d471-4d24-a8b6-22076ae6f07d\") " pod="kube-system/kube-proxy-b7qj2"
Dec 16 12:36:10.289599 kubelet[2656]: I1216 12:36:10.289530 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a52341a7-d471-4d24-a8b6-22076ae6f07d-lib-modules\") pod \"kube-proxy-b7qj2\" (UID: \"a52341a7-d471-4d24-a8b6-22076ae6f07d\") " pod="kube-system/kube-proxy-b7qj2"
Dec 16 12:36:10.541192 containerd[1496]: time="2025-12-16T12:36:10.540805988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7qj2,Uid:a52341a7-d471-4d24-a8b6-22076ae6f07d,Namespace:kube-system,Attempt:0,}"
Dec 16 12:36:10.574585 containerd[1496]: time="2025-12-16T12:36:10.573645667Z" level=info msg="connecting to shim 084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07" address="unix:///run/containerd/s/1c8dc2feb00cfdbfc5d57a824c6757b363f3c48417bfb4a82b8f8299e86cd2f1" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:36:10.591325 kubelet[2656]: I1216 12:36:10.591275 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f23e467-3bef-42b3-8366-67263c861a69-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-g6t9j\" (UID: \"0f23e467-3bef-42b3-8366-67263c861a69\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-g6t9j"
Dec 16 12:36:10.591925 kubelet[2656]: I1216 12:36:10.591706 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vqs\" (UniqueName: \"kubernetes.io/projected/0f23e467-3bef-42b3-8366-67263c861a69-kube-api-access-b4vqs\") pod \"tigera-operator-65cdcdfd6d-g6t9j\" (UID: \"0f23e467-3bef-42b3-8366-67263c861a69\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-g6t9j"
Dec 16 12:36:10.594899 systemd[1]: Created slice kubepods-besteffort-pod0f23e467_3bef_42b3_8366_67263c861a69.slice - libcontainer container kubepods-besteffort-pod0f23e467_3bef_42b3_8366_67263c861a69.slice.
Dec 16 12:36:10.622780 systemd[1]: Started cri-containerd-084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07.scope - libcontainer container 084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07.
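The \"kube-proxy\" volume above is projected from a ConfigMap of the same name in kube-system. A hedged sketch of inspecting it, assuming the `kubernetes` Python client and a valid kubeconfig; the exact keys depend on how the cluster was provisioned (kubeadm typically ships config.conf and kubeconfig.conf):

```python
from kubernetes import client, config

config.load_kube_config()
cm = client.CoreV1Api().read_namespaced_config_map("kube-proxy", "kube-system")
print(sorted(cm.data))   # the files kube-proxy-b7qj2 mounts from this volume
```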
Dec 16 12:36:10.646998 containerd[1496]: time="2025-12-16T12:36:10.646958194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7qj2,Uid:a52341a7-d471-4d24-a8b6-22076ae6f07d,Namespace:kube-system,Attempt:0,} returns sandbox id \"084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07\""
Dec 16 12:36:10.652401 containerd[1496]: time="2025-12-16T12:36:10.652351080Z" level=info msg="CreateContainer within sandbox \"084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 16 12:36:10.665236 containerd[1496]: time="2025-12-16T12:36:10.664138494Z" level=info msg="Container 345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:36:10.674507 containerd[1496]: time="2025-12-16T12:36:10.674462746Z" level=info msg="CreateContainer within sandbox \"084ca793396e3f7e22008d25a9a12f4a7096195e9942c106ea125a9f4acfef07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134\""
Dec 16 12:36:10.675391 containerd[1496]: time="2025-12-16T12:36:10.675276547Z" level=info msg="StartContainer for \"345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134\""
Dec 16 12:36:10.677271 containerd[1496]: time="2025-12-16T12:36:10.677240950Z" level=info msg="connecting to shim 345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134" address="unix:///run/containerd/s/1c8dc2feb00cfdbfc5d57a824c6757b363f3c48417bfb4a82b8f8299e86cd2f1" protocol=ttrpc version=3
Dec 16 12:36:10.702801 systemd[1]: Started cri-containerd-345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134.scope - libcontainer container 345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134.
Dec 16 12:36:10.800615 containerd[1496]: time="2025-12-16T12:36:10.800270695Z" level=info msg="StartContainer for \"345f4438798d187d054f52b51b402e77dc932693ec1520b4025a0fb7b4f1e134\" returns successfully"
Dec 16 12:36:10.903073 containerd[1496]: time="2025-12-16T12:36:10.903030857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-g6t9j,Uid:0f23e467-3bef-42b3-8366-67263c861a69,Namespace:tigera-operator,Attempt:0,}"
Dec 16 12:36:10.923063 containerd[1496]: time="2025-12-16T12:36:10.923017841Z" level=info msg="connecting to shim 4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba" address="unix:///run/containerd/s/730f08a093440718753a7901d9594ca4ba4f6a2909e31b72760cefe57934d835" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:36:10.954782 systemd[1]: Started cri-containerd-4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba.scope - libcontainer container 4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba.
Dec 16 12:36:10.985855 containerd[1496]: time="2025-12-16T12:36:10.985750275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-g6t9j,Uid:0f23e467-3bef-42b3-8366-67263c861a69,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba\""
Dec 16 12:36:10.987658 containerd[1496]: time="2025-12-16T12:36:10.987462557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 16 12:36:12.473038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834696254.mount: Deactivated successfully.
Dec 16 12:36:12.770772 containerd[1496]: time="2025-12-16T12:36:12.770630167Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:12.772158 containerd[1496]: time="2025-12-16T12:36:12.772100008Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Dec 16 12:36:12.776476 containerd[1496]: time="2025-12-16T12:36:12.776430413Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:12.778966 containerd[1496]: time="2025-12-16T12:36:12.778919776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:12.780187 containerd[1496]: time="2025-12-16T12:36:12.779782976Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.791989218s"
Dec 16 12:36:12.780187 containerd[1496]: time="2025-12-16T12:36:12.779823257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Dec 16 12:36:12.788794 containerd[1496]: time="2025-12-16T12:36:12.788749306Z" level=info msg="CreateContainer within sandbox \"4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 12:36:12.797601 containerd[1496]: time="2025-12-16T12:36:12.797086115Z" level=info msg="Container da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:36:12.799466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593838418.mount: Deactivated successfully.
Dec 16 12:36:12.805476 containerd[1496]: time="2025-12-16T12:36:12.805432603Z" level=info msg="CreateContainer within sandbox \"4c8260430548072c40307f09bf7bef69fc7f25f4feadac9701d49d064e0431ba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd\""
Dec 16 12:36:12.806949 containerd[1496]: time="2025-12-16T12:36:12.806718165Z" level=info msg="StartContainer for \"da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd\""
Dec 16 12:36:12.807688 containerd[1496]: time="2025-12-16T12:36:12.807651606Z" level=info msg="connecting to shim da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd" address="unix:///run/containerd/s/730f08a093440718753a7901d9594ca4ba4f6a2909e31b72760cefe57934d835" protocol=ttrpc version=3
Dec 16 12:36:12.839805 systemd[1]: Started cri-containerd-da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd.scope - libcontainer container da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd.
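From the image size and pull duration reported above, the effective pull throughput for the operator image can be estimated:

```python
size_bytes = 22_147_999    # size "22147999" from the Pulled line above
duration_s = 1.791989218   # "... in 1.791989218s"
print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")   # ~11.8 MiB/s
```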
Dec 16 12:36:12.868058 containerd[1496]: time="2025-12-16T12:36:12.867962708Z" level=info msg="StartContainer for \"da3658b9a3f27b6b83b3554396660da3ed909f702de0d045e7592f1ab23a5cfd\" returns successfully"
Dec 16 12:36:13.531728 kubelet[2656]: I1216 12:36:13.531672 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7qj2" podStartSLOduration=3.531656445 podStartE2EDuration="3.531656445s" podCreationTimestamp="2025-12-16 12:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:11.521666152 +0000 UTC m=+7.143734872" watchObservedRunningTime="2025-12-16 12:36:13.531656445 +0000 UTC m=+9.153725165"
Dec 16 12:36:18.233467 sudo[1707]: pam_unix(sudo:session): session closed for user root
Dec 16 12:36:18.236409 sshd[1706]: Connection closed by 10.0.0.1 port 49684
Dec 16 12:36:18.236997 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
Dec 16 12:36:18.242939 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:49684.service: Deactivated successfully.
Dec 16 12:36:18.244737 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:36:18.244922 systemd[1]: session-7.scope: Consumed 6.834s CPU time, 222.9M memory peak.
Dec 16 12:36:18.248859 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:36:18.251586 systemd-logind[1480]: Removed session 7.
Dec 16 12:36:18.899667 update_engine[1483]: I20251216 12:36:18.899587 1483 update_attempter.cc:509] Updating boot flags...
Dec 16 12:36:19.490445 kubelet[2656]: I1216 12:36:19.490376 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-g6t9j" podStartSLOduration=7.69675857 podStartE2EDuration="9.49036075s" podCreationTimestamp="2025-12-16 12:36:10 +0000 UTC" firstStartedPulling="2025-12-16 12:36:10.986974157 +0000 UTC m=+6.609042837" lastFinishedPulling="2025-12-16 12:36:12.780576337 +0000 UTC m=+8.402645017" observedRunningTime="2025-12-16 12:36:13.531926806 +0000 UTC m=+9.153995566" watchObservedRunningTime="2025-12-16 12:36:19.49036075 +0000 UTC m=+15.112429470"
Dec 16 12:36:27.117243 systemd[1]: Created slice kubepods-besteffort-pod1ac52c96_0374_46a9_9042_aea86e1b0d01.slice - libcontainer container kubepods-besteffort-pod1ac52c96_0374_46a9_9042_aea86e1b0d01.slice.
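For tigera-operator the pull window is non-zero, and the SLO duration above is the end-to-end startup time minus that window; the log's own numbers confirm it:

```python
e2e  = 9.49036075                     # podStartE2EDuration
pull = 12.780576337 - 10.986974157    # lastFinishedPulling - firstStartedPulling
print(round(e2e - pull, 8))           # 7.69675857 == podStartSLOduration
```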
Dec 16 12:36:27.200510 kubelet[2656]: I1216 12:36:27.200467 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zk6\" (UniqueName: \"kubernetes.io/projected/1ac52c96-0374-46a9-9042-aea86e1b0d01-kube-api-access-v9zk6\") pod \"calico-typha-559f5d86ff-c42q2\" (UID: \"1ac52c96-0374-46a9-9042-aea86e1b0d01\") " pod="calico-system/calico-typha-559f5d86ff-c42q2" Dec 16 12:36:27.200916 kubelet[2656]: I1216 12:36:27.200516 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ac52c96-0374-46a9-9042-aea86e1b0d01-tigera-ca-bundle\") pod \"calico-typha-559f5d86ff-c42q2\" (UID: \"1ac52c96-0374-46a9-9042-aea86e1b0d01\") " pod="calico-system/calico-typha-559f5d86ff-c42q2" Dec 16 12:36:27.200916 kubelet[2656]: I1216 12:36:27.200600 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1ac52c96-0374-46a9-9042-aea86e1b0d01-typha-certs\") pod \"calico-typha-559f5d86ff-c42q2\" (UID: \"1ac52c96-0374-46a9-9042-aea86e1b0d01\") " pod="calico-system/calico-typha-559f5d86ff-c42q2" Dec 16 12:36:27.288561 systemd[1]: Created slice kubepods-besteffort-podb090834f_18b7_44a5_84d8_964b360e87af.slice - libcontainer container kubepods-besteffort-podb090834f_18b7_44a5_84d8_964b360e87af.slice. Dec 16 12:36:27.300911 kubelet[2656]: I1216 12:36:27.300871 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-cni-net-dir\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301069 kubelet[2656]: I1216 12:36:27.300929 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-flexvol-driver-host\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301069 kubelet[2656]: I1216 12:36:27.300950 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmr6x\" (UniqueName: \"kubernetes.io/projected/b090834f-18b7-44a5-84d8-964b360e87af-kube-api-access-bmr6x\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301069 kubelet[2656]: I1216 12:36:27.300966 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-lib-modules\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301069 kubelet[2656]: I1216 12:36:27.300991 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-policysync\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301069 kubelet[2656]: I1216 12:36:27.301006 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-var-lib-calico\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301193 kubelet[2656]: I1216 12:36:27.301081 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-cni-bin-dir\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301193 kubelet[2656]: I1216 12:36:27.301102 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b090834f-18b7-44a5-84d8-964b360e87af-node-certs\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.301193 kubelet[2656]: I1216 12:36:27.301117 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-xtables-lock\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.302011 kubelet[2656]: I1216 12:36:27.301723 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b090834f-18b7-44a5-84d8-964b360e87af-tigera-ca-bundle\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.302011 kubelet[2656]: I1216 12:36:27.301776 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-cni-log-dir\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.302011 kubelet[2656]: I1216 12:36:27.301790 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b090834f-18b7-44a5-84d8-964b360e87af-var-run-calico\") pod \"calico-node-bkdzq\" (UID: \"b090834f-18b7-44a5-84d8-964b360e87af\") " pod="calico-system/calico-node-bkdzq" Dec 16 12:36:27.408319 kubelet[2656]: E1216 12:36:27.408212 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.408319 kubelet[2656]: W1216 12:36:27.408240 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.408319 kubelet[2656]: E1216 12:36:27.408272 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.414986 kubelet[2656]: E1216 12:36:27.414898 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.414986 kubelet[2656]: W1216 12:36:27.414923 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.414986 kubelet[2656]: E1216 12:36:27.414944 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.424997 containerd[1496]: time="2025-12-16T12:36:27.424627233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559f5d86ff-c42q2,Uid:1ac52c96-0374-46a9-9042-aea86e1b0d01,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:27.479308 kubelet[2656]: E1216 12:36:27.479246 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8" Dec 16 12:36:27.489724 kubelet[2656]: E1216 12:36:27.489082 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.489724 kubelet[2656]: W1216 12:36:27.489113 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.489724 kubelet[2656]: E1216 12:36:27.489150 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.492582 kubelet[2656]: E1216 12:36:27.492475 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.492582 kubelet[2656]: W1216 12:36:27.492520 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.492745 kubelet[2656]: E1216 12:36:27.492592 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.493566 kubelet[2656]: E1216 12:36:27.493521 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.493566 kubelet[2656]: W1216 12:36:27.493555 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.493719 kubelet[2656]: E1216 12:36:27.493577 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.494490 kubelet[2656]: E1216 12:36:27.494463 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.494533 kubelet[2656]: W1216 12:36:27.494493 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.494533 kubelet[2656]: E1216 12:36:27.494512 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.495411 kubelet[2656]: E1216 12:36:27.495393 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.495411 kubelet[2656]: W1216 12:36:27.495411 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.495485 kubelet[2656]: E1216 12:36:27.495426 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.495683 kubelet[2656]: E1216 12:36:27.495668 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.495683 kubelet[2656]: W1216 12:36:27.495682 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.495759 kubelet[2656]: E1216 12:36:27.495693 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.495911 kubelet[2656]: E1216 12:36:27.495883 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.495948 kubelet[2656]: W1216 12:36:27.495913 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.495948 kubelet[2656]: E1216 12:36:27.495926 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.496126 kubelet[2656]: E1216 12:36:27.496114 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.496168 kubelet[2656]: W1216 12:36:27.496127 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.496168 kubelet[2656]: E1216 12:36:27.496137 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.496375 kubelet[2656]: E1216 12:36:27.496363 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.496410 kubelet[2656]: W1216 12:36:27.496375 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.496410 kubelet[2656]: E1216 12:36:27.496387 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.496614 kubelet[2656]: E1216 12:36:27.496602 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.496654 kubelet[2656]: W1216 12:36:27.496614 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.496654 kubelet[2656]: E1216 12:36:27.496624 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.496820 kubelet[2656]: E1216 12:36:27.496807 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.496855 kubelet[2656]: W1216 12:36:27.496820 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.496855 kubelet[2656]: E1216 12:36:27.496832 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.497000 kubelet[2656]: E1216 12:36:27.496987 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.497000 kubelet[2656]: W1216 12:36:27.496999 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.497057 kubelet[2656]: E1216 12:36:27.497008 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.497194 kubelet[2656]: E1216 12:36:27.497183 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.497194 kubelet[2656]: W1216 12:36:27.497194 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.497254 kubelet[2656]: E1216 12:36:27.497203 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.497375 kubelet[2656]: E1216 12:36:27.497364 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.497375 kubelet[2656]: W1216 12:36:27.497375 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.497435 kubelet[2656]: E1216 12:36:27.497383 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.497641 kubelet[2656]: E1216 12:36:27.497621 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.497641 kubelet[2656]: W1216 12:36:27.497637 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.497714 kubelet[2656]: E1216 12:36:27.497648 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.497863 kubelet[2656]: E1216 12:36:27.497850 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.497899 kubelet[2656]: W1216 12:36:27.497866 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.497899 kubelet[2656]: E1216 12:36:27.497878 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.498181 kubelet[2656]: E1216 12:36:27.498161 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.498181 kubelet[2656]: W1216 12:36:27.498177 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.498246 kubelet[2656]: E1216 12:36:27.498188 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.498623 kubelet[2656]: E1216 12:36:27.498600 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.498623 kubelet[2656]: W1216 12:36:27.498617 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.498737 kubelet[2656]: E1216 12:36:27.498630 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.498912 kubelet[2656]: E1216 12:36:27.498898 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.498912 kubelet[2656]: W1216 12:36:27.498912 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.498985 kubelet[2656]: E1216 12:36:27.498922 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.500206 kubelet[2656]: E1216 12:36:27.500171 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.500206 kubelet[2656]: W1216 12:36:27.500192 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.500206 kubelet[2656]: E1216 12:36:27.500207 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.501970 containerd[1496]: time="2025-12-16T12:36:27.501929144Z" level=info msg="connecting to shim 703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86" address="unix:///run/containerd/s/931f194e2aae039bd2c0ad2134b1d34cf017d1307a5f37c83ede694f397e24a5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:27.503014 kubelet[2656]: E1216 12:36:27.502868 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.503014 kubelet[2656]: W1216 12:36:27.502885 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.503014 kubelet[2656]: E1216 12:36:27.502899 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.503014 kubelet[2656]: I1216 12:36:27.502927 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1fff280a-2bf1-4f6b-8d2a-055392e26ba8-varrun\") pod \"csi-node-driver-v9rch\" (UID: \"1fff280a-2bf1-4f6b-8d2a-055392e26ba8\") " pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:27.503233 kubelet[2656]: E1216 12:36:27.503214 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.503297 kubelet[2656]: W1216 12:36:27.503284 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.503343 kubelet[2656]: E1216 12:36:27.503334 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.503416 kubelet[2656]: I1216 12:36:27.503402 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fff280a-2bf1-4f6b-8d2a-055392e26ba8-kubelet-dir\") pod \"csi-node-driver-v9rch\" (UID: \"1fff280a-2bf1-4f6b-8d2a-055392e26ba8\") " pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:27.503656 kubelet[2656]: E1216 12:36:27.503626 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.503656 kubelet[2656]: W1216 12:36:27.503646 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.503731 kubelet[2656]: E1216 12:36:27.503659 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.503838 kubelet[2656]: E1216 12:36:27.503812 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.503838 kubelet[2656]: W1216 12:36:27.503824 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.503838 kubelet[2656]: E1216 12:36:27.503833 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.504190 kubelet[2656]: E1216 12:36:27.504165 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.504190 kubelet[2656]: W1216 12:36:27.504183 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.504190 kubelet[2656]: E1216 12:36:27.504194 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.504408 kubelet[2656]: E1216 12:36:27.504386 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.504408 kubelet[2656]: W1216 12:36:27.504400 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.504408 kubelet[2656]: E1216 12:36:27.504410 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.505202 kubelet[2656]: E1216 12:36:27.504661 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.505202 kubelet[2656]: W1216 12:36:27.504676 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.505202 kubelet[2656]: E1216 12:36:27.504687 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.505202 kubelet[2656]: I1216 12:36:27.504713 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1fff280a-2bf1-4f6b-8d2a-055392e26ba8-registration-dir\") pod \"csi-node-driver-v9rch\" (UID: \"1fff280a-2bf1-4f6b-8d2a-055392e26ba8\") " pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:27.505202 kubelet[2656]: E1216 12:36:27.504938 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.505202 kubelet[2656]: W1216 12:36:27.504951 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.505202 kubelet[2656]: E1216 12:36:27.504961 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.505202 kubelet[2656]: I1216 12:36:27.504978 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1fff280a-2bf1-4f6b-8d2a-055392e26ba8-socket-dir\") pod \"csi-node-driver-v9rch\" (UID: \"1fff280a-2bf1-4f6b-8d2a-055392e26ba8\") " pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:27.505202 kubelet[2656]: E1216 12:36:27.505179 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.505473 kubelet[2656]: W1216 12:36:27.505191 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.505473 kubelet[2656]: E1216 12:36:27.505203 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.505473 kubelet[2656]: I1216 12:36:27.505223 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwb4z\" (UniqueName: \"kubernetes.io/projected/1fff280a-2bf1-4f6b-8d2a-055392e26ba8-kube-api-access-vwb4z\") pod \"csi-node-driver-v9rch\" (UID: \"1fff280a-2bf1-4f6b-8d2a-055392e26ba8\") " pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:27.505473 kubelet[2656]: E1216 12:36:27.505467 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.505576 kubelet[2656]: W1216 12:36:27.505478 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.505576 kubelet[2656]: E1216 12:36:27.505518 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.506828 kubelet[2656]: E1216 12:36:27.506804 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.506908 kubelet[2656]: W1216 12:36:27.506828 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.506932 kubelet[2656]: E1216 12:36:27.506851 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.507215 kubelet[2656]: E1216 12:36:27.507201 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.507215 kubelet[2656]: W1216 12:36:27.507214 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.507278 kubelet[2656]: E1216 12:36:27.507238 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.507641 kubelet[2656]: E1216 12:36:27.507519 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.507641 kubelet[2656]: W1216 12:36:27.507531 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.507641 kubelet[2656]: E1216 12:36:27.507554 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.508561 kubelet[2656]: E1216 12:36:27.507821 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.508561 kubelet[2656]: W1216 12:36:27.507833 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.508561 kubelet[2656]: E1216 12:36:27.507846 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.508561 kubelet[2656]: E1216 12:36:27.508010 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.508561 kubelet[2656]: W1216 12:36:27.508019 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.508561 kubelet[2656]: E1216 12:36:27.508030 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.559463 systemd[1]: Started cri-containerd-703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86.scope - libcontainer container 703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86. Dec 16 12:36:27.607315 kubelet[2656]: E1216 12:36:27.607280 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.607315 kubelet[2656]: W1216 12:36:27.607325 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.607501 kubelet[2656]: E1216 12:36:27.607354 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.607699 kubelet[2656]: E1216 12:36:27.607680 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.607699 kubelet[2656]: W1216 12:36:27.607694 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.607766 kubelet[2656]: E1216 12:36:27.607705 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.607938 kubelet[2656]: E1216 12:36:27.607924 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.607970 kubelet[2656]: W1216 12:36:27.607949 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.607970 kubelet[2656]: E1216 12:36:27.607960 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.608310 kubelet[2656]: E1216 12:36:27.608278 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.608310 kubelet[2656]: W1216 12:36:27.608293 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.608310 kubelet[2656]: E1216 12:36:27.608303 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.609794 kubelet[2656]: E1216 12:36:27.609769 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.609794 kubelet[2656]: W1216 12:36:27.609791 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.609857 kubelet[2656]: E1216 12:36:27.609807 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.610161 kubelet[2656]: E1216 12:36:27.610126 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.610161 kubelet[2656]: W1216 12:36:27.610149 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.610161 kubelet[2656]: E1216 12:36:27.610161 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.610452 kubelet[2656]: E1216 12:36:27.610434 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.610452 kubelet[2656]: W1216 12:36:27.610449 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.610509 kubelet[2656]: E1216 12:36:27.610470 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.610739 kubelet[2656]: E1216 12:36:27.610722 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.610739 kubelet[2656]: W1216 12:36:27.610737 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.610798 kubelet[2656]: E1216 12:36:27.610747 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.610953 kubelet[2656]: E1216 12:36:27.610939 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.610953 kubelet[2656]: W1216 12:36:27.610951 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.611003 kubelet[2656]: E1216 12:36:27.610959 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.611222 kubelet[2656]: E1216 12:36:27.611208 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.611222 kubelet[2656]: W1216 12:36:27.611220 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.611274 kubelet[2656]: E1216 12:36:27.611230 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.611406 kubelet[2656]: E1216 12:36:27.611394 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.611475 kubelet[2656]: W1216 12:36:27.611412 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.611475 kubelet[2656]: E1216 12:36:27.611445 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.611880 kubelet[2656]: E1216 12:36:27.611858 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.611880 kubelet[2656]: W1216 12:36:27.611878 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.611942 kubelet[2656]: E1216 12:36:27.611890 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.612113 kubelet[2656]: E1216 12:36:27.612092 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.612155 kubelet[2656]: W1216 12:36:27.612120 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.612155 kubelet[2656]: E1216 12:36:27.612130 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.612196 containerd[1496]: time="2025-12-16T12:36:27.612127387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bkdzq,Uid:b090834f-18b7-44a5-84d8-964b360e87af,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:27.612338 kubelet[2656]: E1216 12:36:27.612324 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.612338 kubelet[2656]: W1216 12:36:27.612336 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.612384 kubelet[2656]: E1216 12:36:27.612344 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.612526 kubelet[2656]: E1216 12:36:27.612513 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.612526 kubelet[2656]: W1216 12:36:27.612524 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.612600 kubelet[2656]: E1216 12:36:27.612533 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.612736 kubelet[2656]: E1216 12:36:27.612722 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.612736 kubelet[2656]: W1216 12:36:27.612734 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.612784 kubelet[2656]: E1216 12:36:27.612743 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.612940 kubelet[2656]: E1216 12:36:27.612926 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.612940 kubelet[2656]: W1216 12:36:27.612937 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.612985 kubelet[2656]: E1216 12:36:27.612946 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.613350 kubelet[2656]: E1216 12:36:27.613333 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.613350 kubelet[2656]: W1216 12:36:27.613347 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.613402 kubelet[2656]: E1216 12:36:27.613358 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.613679 kubelet[2656]: E1216 12:36:27.613663 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.613679 kubelet[2656]: W1216 12:36:27.613677 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.613738 kubelet[2656]: E1216 12:36:27.613689 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.613901 kubelet[2656]: E1216 12:36:27.613888 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.613901 kubelet[2656]: W1216 12:36:27.613899 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.613946 kubelet[2656]: E1216 12:36:27.613908 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.614123 kubelet[2656]: E1216 12:36:27.614109 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.614123 kubelet[2656]: W1216 12:36:27.614122 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.614188 kubelet[2656]: E1216 12:36:27.614131 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.614317 kubelet[2656]: E1216 12:36:27.614304 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.614317 kubelet[2656]: W1216 12:36:27.614316 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.614363 kubelet[2656]: E1216 12:36:27.614324 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.614514 kubelet[2656]: E1216 12:36:27.614501 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.614514 kubelet[2656]: W1216 12:36:27.614514 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.614593 kubelet[2656]: E1216 12:36:27.614522 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.614738 kubelet[2656]: E1216 12:36:27.614725 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.614738 kubelet[2656]: W1216 12:36:27.614736 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.614788 kubelet[2656]: E1216 12:36:27.614745 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:27.615221 kubelet[2656]: E1216 12:36:27.615205 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.615251 kubelet[2656]: W1216 12:36:27.615219 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.615251 kubelet[2656]: E1216 12:36:27.615230 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:36:27.618619 containerd[1496]: time="2025-12-16T12:36:27.618486070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-559f5d86ff-c42q2,Uid:1ac52c96-0374-46a9-9042-aea86e1b0d01,Namespace:calico-system,Attempt:0,} returns sandbox id \"703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86\"" Dec 16 12:36:27.620084 containerd[1496]: time="2025-12-16T12:36:27.620058551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:36:27.759688 kubelet[2656]: E1216 12:36:27.757787 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:27.759688 kubelet[2656]: W1216 12:36:27.759579 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:27.759688 kubelet[2656]: E1216 12:36:27.759622 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:28.324309 containerd[1496]: time="2025-12-16T12:36:28.324219701Z" level=info msg="connecting to shim 9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78" address="unix:///run/containerd/s/ec085242e8b31cdc350a912a564defc264f2ca7adaa3036d708181badf465141" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:28.365772 systemd[1]: Started cri-containerd-9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78.scope - libcontainer container 9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78. Dec 16 12:36:28.437779 containerd[1496]: time="2025-12-16T12:36:28.437739263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bkdzq,Uid:b090834f-18b7-44a5-84d8-964b360e87af,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\"" Dec 16 12:36:29.049891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323701681.mount: Deactivated successfully. 
Dec 16 12:36:29.477356 kubelet[2656]: E1216 12:36:29.477107 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8"
Dec 16 12:36:29.947168 containerd[1496]: time="2025-12-16T12:36:29.947104161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:29.948306 containerd[1496]: time="2025-12-16T12:36:29.948115282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 16 12:36:29.949245 containerd[1496]: time="2025-12-16T12:36:29.949209362Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:29.951616 containerd[1496]: time="2025-12-16T12:36:29.951540443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:29.952805 containerd[1496]: time="2025-12-16T12:36:29.952770723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.332604092s"
Dec 16 12:36:29.952863 containerd[1496]: time="2025-12-16T12:36:29.952811803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 16 12:36:29.954712 containerd[1496]: time="2025-12-16T12:36:29.954262804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:36:29.971506 containerd[1496]: time="2025-12-16T12:36:29.971459530Z" level=info msg="CreateContainer within sandbox \"703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 12:36:29.985801 containerd[1496]: time="2025-12-16T12:36:29.985746255Z" level=info msg="Container 57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:36:29.993439 containerd[1496]: time="2025-12-16T12:36:29.993391938Z" level=info msg="CreateContainer within sandbox \"703861a49f82289b693f98ee8694ae4408f6b83a4b2a4dafd0d0b5bff9dc7b86\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138\""
Dec 16 12:36:29.994056 containerd[1496]: time="2025-12-16T12:36:29.994030778Z" level=info msg="StartContainer for \"57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138\""
Dec 16 12:36:29.995459 containerd[1496]: time="2025-12-16T12:36:29.995406338Z" level=info msg="connecting to shim 57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138" address="unix:///run/containerd/s/931f194e2aae039bd2c0ad2134b1d34cf017d1307a5f37c83ede694f397e24a5" protocol=ttrpc version=3
Dec 16 12:36:30.035776 systemd[1]: Started cri-containerd-57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138.scope - libcontainer container 57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138.
Dec 16 12:36:30.077770 containerd[1496]: time="2025-12-16T12:36:30.077732525Z" level=info msg="StartContainer for \"57aea85e39a40b4a69a891a487ce83dd01c3b673641cdbb24f32154cd2487138\" returns successfully"
Dec 16 12:36:30.620436 kubelet[2656]: E1216 12:36:30.620404 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.620984 kubelet[2656]: W1216 12:36:30.620849 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.620984 kubelet[2656]: E1216 12:36:30.620883 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
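The typha pull a few entries above is internally consistent: containerd logged PullImage at 2025-12-16T12:36:27.620058551Z and the "Pulled ... in 2.332604092s" completion at 12:36:29.952770723Z, and subtracting the two journal timestamps lands within about a tenth of a millisecond of the reported duration (the completion line is written just after the pull finishes, so an exact match is not expected). A quick check of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-12-16T12:36:27.620058551Z") // PullImage logged
	done, _ := time.Parse(time.RFC3339Nano, "2025-12-16T12:36:29.952770723Z") // Pulled ... logged
	fmt.Println(done.Sub(start)) // prints 2.332712172s, vs the reported 2.332604092s
}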
Error: unexpected end of JSON input" Dec 16 12:36:30.622037 kubelet[2656]: E1216 12:36:30.621938 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:30.622037 kubelet[2656]: W1216 12:36:30.621950 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:30.622037 kubelet[2656]: E1216 12:36:30.621959 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:30.622202 kubelet[2656]: E1216 12:36:30.622190 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:30.622262 kubelet[2656]: W1216 12:36:30.622247 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:30.622321 kubelet[2656]: E1216 12:36:30.622311 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:30.622620 kubelet[2656]: E1216 12:36:30.622500 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:30.622620 kubelet[2656]: W1216 12:36:30.622512 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:30.622620 kubelet[2656]: E1216 12:36:30.622531 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:30.622786 kubelet[2656]: E1216 12:36:30.622773 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:30.622861 kubelet[2656]: W1216 12:36:30.622848 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:30.622995 kubelet[2656]: E1216 12:36:30.622904 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:36:30.623085 kubelet[2656]: E1216 12:36:30.623074 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:36:30.623151 kubelet[2656]: W1216 12:36:30.623139 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:36:30.623223 kubelet[2656]: E1216 12:36:30.623210 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 16 12:36:30.623522 kubelet[2656]: E1216 12:36:30.623425 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.623522 kubelet[2656]: W1216 12:36:30.623437 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.623522 kubelet[2656]: E1216 12:36:30.623447 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.623696 kubelet[2656]: E1216 12:36:30.623683 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.623750 kubelet[2656]: W1216 12:36:30.623739 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.624181 kubelet[2656]: E1216 12:36:30.623794 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.624307 kubelet[2656]: E1216 12:36:30.624291 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.624364 kubelet[2656]: W1216 12:36:30.624352 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.624423 kubelet[2656]: E1216 12:36:30.624405 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.627481 kubelet[2656]: E1216 12:36:30.627452 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.627481 kubelet[2656]: W1216 12:36:30.627471 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.627481 kubelet[2656]: E1216 12:36:30.627485 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.627650 kubelet[2656]: E1216 12:36:30.627637 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.627650 kubelet[2656]: W1216 12:36:30.627647 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.627706 kubelet[2656]: E1216 12:36:30.627656 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.627787 kubelet[2656]: E1216 12:36:30.627777 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.627787 kubelet[2656]: W1216 12:36:30.627786 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.627832 kubelet[2656]: E1216 12:36:30.627794 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.640664 kubelet[2656]: E1216 12:36:30.640628 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.640664 kubelet[2656]: W1216 12:36:30.640653 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.640760 kubelet[2656]: E1216 12:36:30.640672 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.640918 kubelet[2656]: E1216 12:36:30.640889 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.640918 kubelet[2656]: W1216 12:36:30.640901 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.640918 kubelet[2656]: E1216 12:36:30.640910 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.641106 kubelet[2656]: E1216 12:36:30.641082 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.641106 kubelet[2656]: W1216 12:36:30.641093 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.641106 kubelet[2656]: E1216 12:36:30.641102 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.641333 kubelet[2656]: E1216 12:36:30.641307 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.641333 kubelet[2656]: W1216 12:36:30.641319 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.641333 kubelet[2656]: E1216 12:36:30.641328 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.641471 kubelet[2656]: E1216 12:36:30.641459 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.641471 kubelet[2656]: W1216 12:36:30.641469 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.641520 kubelet[2656]: E1216 12:36:30.641476 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.641619 kubelet[2656]: E1216 12:36:30.641608 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.641649 kubelet[2656]: W1216 12:36:30.641618 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.641649 kubelet[2656]: E1216 12:36:30.641627 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.641814 kubelet[2656]: E1216 12:36:30.641789 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.641814 kubelet[2656]: W1216 12:36:30.641801 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.641814 kubelet[2656]: E1216 12:36:30.641809 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.642162 kubelet[2656]: E1216 12:36:30.642121 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.642162 kubelet[2656]: W1216 12:36:30.642151 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.642217 kubelet[2656]: E1216 12:36:30.642164 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.642498 kubelet[2656]: E1216 12:36:30.642482 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.642524 kubelet[2656]: W1216 12:36:30.642498 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.642524 kubelet[2656]: E1216 12:36:30.642509 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.642754 kubelet[2656]: E1216 12:36:30.642741 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.642754 kubelet[2656]: W1216 12:36:30.642752 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.642815 kubelet[2656]: E1216 12:36:30.642761 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.643295 kubelet[2656]: E1216 12:36:30.643278 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.643295 kubelet[2656]: W1216 12:36:30.643292 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.643349 kubelet[2656]: E1216 12:36:30.643304 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.643459 kubelet[2656]: E1216 12:36:30.643447 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.643459 kubelet[2656]: W1216 12:36:30.643457 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.643506 kubelet[2656]: E1216 12:36:30.643465 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.646633 kubelet[2656]: E1216 12:36:30.646615 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.646633 kubelet[2656]: W1216 12:36:30.646630 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.646702 kubelet[2656]: E1216 12:36:30.646643 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.646956 kubelet[2656]: E1216 12:36:30.646920 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.646956 kubelet[2656]: W1216 12:36:30.646940 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.646956 kubelet[2656]: E1216 12:36:30.646953 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.647115 kubelet[2656]: E1216 12:36:30.647103 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.647115 kubelet[2656]: W1216 12:36:30.647113 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.647169 kubelet[2656]: E1216 12:36:30.647122 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.647308 kubelet[2656]: E1216 12:36:30.647296 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.647331 kubelet[2656]: W1216 12:36:30.647309 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.647331 kubelet[2656]: E1216 12:36:30.647318 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.647619 kubelet[2656]: E1216 12:36:30.647594 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.647619 kubelet[2656]: W1216 12:36:30.647607 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.647619 kubelet[2656]: E1216 12:36:30.647617 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:30.647777 kubelet[2656]: E1216 12:36:30.647765 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:36:30.647777 kubelet[2656]: W1216 12:36:30.647776 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:36:30.647824 kubelet[2656]: E1216 12:36:30.647785 2656 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:36:31.018150 containerd[1496]: time="2025-12-16T12:36:31.018019072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:31.019869 containerd[1496]: time="2025-12-16T12:36:31.019837512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Dec 16 12:36:31.020727 containerd[1496]: time="2025-12-16T12:36:31.020699392Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:31.023210 containerd[1496]: time="2025-12-16T12:36:31.023175993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:31.023917 containerd[1496]: time="2025-12-16T12:36:31.023774873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.069474509s"
Dec 16 12:36:31.023917 containerd[1496]: time="2025-12-16T12:36:31.023815873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Dec 16 12:36:31.027789 containerd[1496]: time="2025-12-16T12:36:31.027759435Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 12:36:31.053914 containerd[1496]: time="2025-12-16T12:36:31.053858123Z" level=info msg="Container 14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:36:31.065403 containerd[1496]: time="2025-12-16T12:36:31.065339846Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9\""
Dec 16 12:36:31.065913 containerd[1496]: time="2025-12-16T12:36:31.065880206Z" level=info msg="StartContainer for \"14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9\""
Dec 16 12:36:31.067410 containerd[1496]: time="2025-12-16T12:36:31.067381767Z" level=info msg="connecting to shim 14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9" address="unix:///run/containerd/s/ec085242e8b31cdc350a912a564defc264f2ca7adaa3036d708181badf465141" protocol=ttrpc version=3
Dec 16 12:36:31.086830 systemd[1]: Started cri-containerd-14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9.scope - libcontainer container 14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9.
Dec 16 12:36:31.165728 containerd[1496]: time="2025-12-16T12:36:31.165596517Z" level=info msg="StartContainer for \"14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9\" returns successfully"
Dec 16 12:36:31.180188 systemd[1]: cri-containerd-14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9.scope: Deactivated successfully.
Dec 16 12:36:31.226731 containerd[1496]: time="2025-12-16T12:36:31.226665295Z" level=info msg="received container exit event container_id:\"14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9\" id:\"14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9\" pid:3361 exited_at:{seconds:1765888591 nanos:219060733}"
Dec 16 12:36:31.272494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14859ca95200ac9e8409729141745dbe1fd82d2fb203ff5684c155c7adadc3e9-rootfs.mount: Deactivated successfully.
Dec 16 12:36:31.477016 kubelet[2656]: E1216 12:36:31.476949 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8"
Dec 16 12:36:31.577570 kubelet[2656]: I1216 12:36:31.577071 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 12:36:31.579393 containerd[1496]: time="2025-12-16T12:36:31.579346003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 12:36:31.612332 kubelet[2656]: I1216 12:36:31.610595 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-559f5d86ff-c42q2" podStartSLOduration=2.276735519 podStartE2EDuration="4.610577413s" podCreationTimestamp="2025-12-16 12:36:27 +0000 UTC" firstStartedPulling="2025-12-16 12:36:27.61981255 +0000 UTC m=+23.241881230" lastFinishedPulling="2025-12-16 12:36:29.953654404 +0000 UTC m=+25.575723124" observedRunningTime="2025-12-16 12:36:30.602106296 +0000 UTC m=+26.224175016" watchObservedRunningTime="2025-12-16 12:36:31.610577413 +0000 UTC m=+27.232646133"
Dec 16 12:36:33.258309 containerd[1496]: time="2025-12-16T12:36:33.258254008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:33.259221 containerd[1496]: time="2025-12-16T12:36:33.258913208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Dec 16 12:36:33.260033 containerd[1496]: time="2025-12-16T12:36:33.260002208Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:33.262498 containerd[1496]: time="2025-12-16T12:36:33.262460169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:36:33.263684 containerd[1496]: time="2025-12-16T12:36:33.263644969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.684255846s"
Dec 16 12:36:33.263684 containerd[1496]: time="2025-12-16T12:36:33.263678249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Dec 16 12:36:33.268561 containerd[1496]: time="2025-12-16T12:36:33.268512531Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 16 12:36:33.277326 containerd[1496]: time="2025-12-16T12:36:33.276188733Z" level=info msg="Container 332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:36:33.287374 containerd[1496]: time="2025-12-16T12:36:33.287318496Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e\""
Dec 16 12:36:33.288386 containerd[1496]: time="2025-12-16T12:36:33.288347816Z" level=info msg="StartContainer for \"332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e\""
Dec 16 12:36:33.290285 containerd[1496]: time="2025-12-16T12:36:33.290216176Z" level=info msg="connecting to shim 332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e" address="unix:///run/containerd/s/ec085242e8b31cdc350a912a564defc264f2ca7adaa3036d708181badf465141" protocol=ttrpc version=3
Dec 16 12:36:33.317787 systemd[1]: Started cri-containerd-332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e.scope - libcontainer container 332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e.
Dec 16 12:36:33.397994 containerd[1496]: time="2025-12-16T12:36:33.397945605Z" level=info msg="StartContainer for \"332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e\" returns successfully"
Dec 16 12:36:33.477183 kubelet[2656]: E1216 12:36:33.477116 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8"
Dec 16 12:36:34.231478 systemd[1]: cri-containerd-332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e.scope: Deactivated successfully.
Dec 16 12:36:34.232291 systemd[1]: cri-containerd-332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e.scope: Consumed 498ms CPU time, 173.4M memory peak, 1.9M read from disk, 165.9M written to disk.
Dec 16 12:36:34.233951 containerd[1496]: time="2025-12-16T12:36:34.233737026Z" level=info msg="received container exit event container_id:\"332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e\" id:\"332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e\" pid:3422 exited_at:{seconds:1765888594 nanos:233475386}"
Dec 16 12:36:34.261862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-332a96e8d9bada30155d65486a11bae0c6e0eb7800c352c1eb99d28dd168d65e-rootfs.mount: Deactivated successfully.
Dec 16 12:36:34.299986 kubelet[2656]: I1216 12:36:34.299958 2656 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Dec 16 12:36:34.411865 systemd[1]: Created slice kubepods-burstable-pod9399c1e0_aa0f_4199_b50c_d1558534a15f.slice - libcontainer container kubepods-burstable-pod9399c1e0_aa0f_4199_b50c_d1558534a15f.slice.
Dec 16 12:36:34.459869 systemd[1]: Created slice kubepods-besteffort-pod71fab9ca_3b60_4f2f_9864_f226dad6b716.slice - libcontainer container kubepods-besteffort-pod71fab9ca_3b60_4f2f_9864_f226dad6b716.slice.
Dec 16 12:36:34.466289 kubelet[2656]: I1216 12:36:34.466231 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shn6s\" (UniqueName: \"kubernetes.io/projected/9399c1e0-aa0f-4199-b50c-d1558534a15f-kube-api-access-shn6s\") pod \"coredns-66bc5c9577-xrhkw\" (UID: \"9399c1e0-aa0f-4199-b50c-d1558534a15f\") " pod="kube-system/coredns-66bc5c9577-xrhkw"
Dec 16 12:36:34.466289 kubelet[2656]: I1216 12:36:34.466268 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71fab9ca-3b60-4f2f-9864-f226dad6b716-calico-apiserver-certs\") pod \"calico-apiserver-66f874bbcf-h72xl\" (UID: \"71fab9ca-3b60-4f2f-9864-f226dad6b716\") " pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl"
Dec 16 12:36:34.466289 kubelet[2656]: I1216 12:36:34.466287 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9399c1e0-aa0f-4199-b50c-d1558534a15f-config-volume\") pod \"coredns-66bc5c9577-xrhkw\" (UID: \"9399c1e0-aa0f-4199-b50c-d1558534a15f\") " pod="kube-system/coredns-66bc5c9577-xrhkw"
Dec 16 12:36:34.466517 kubelet[2656]: I1216 12:36:34.466306 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swqqd\" (UniqueName: \"kubernetes.io/projected/71fab9ca-3b60-4f2f-9864-f226dad6b716-kube-api-access-swqqd\") pod \"calico-apiserver-66f874bbcf-h72xl\" (UID: \"71fab9ca-3b60-4f2f-9864-f226dad6b716\") " pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl"
Dec 16 12:36:34.482713 systemd[1]: Created slice kubepods-besteffort-pod80e0532e_7ec6_441f_8cab_1a7e0256384d.slice - libcontainer container kubepods-besteffort-pod80e0532e_7ec6_441f_8cab_1a7e0256384d.slice.
Dec 16 12:36:34.489492 systemd[1]: Created slice kubepods-burstable-podac18c2cb_4c77_47fc_8aba_04be84531916.slice - libcontainer container kubepods-burstable-podac18c2cb_4c77_47fc_8aba_04be84531916.slice.
Dec 16 12:36:34.494426 systemd[1]: Created slice kubepods-besteffort-pod22a17076_a5c0_4472_8265_e6aeee78b179.slice - libcontainer container kubepods-besteffort-pod22a17076_a5c0_4472_8265_e6aeee78b179.slice.
Dec 16 12:36:34.506533 systemd[1]: Created slice kubepods-besteffort-pod9c610c1e_d14d_4a7b_8ace_6bdd6870d6a2.slice - libcontainer container kubepods-besteffort-pod9c610c1e_d14d_4a7b_8ace_6bdd6870d6a2.slice.
Dec 16 12:36:34.512906 systemd[1]: Created slice kubepods-besteffort-podffda2e75_a631_43f2_adbc_14ba2a0b562c.slice - libcontainer container kubepods-besteffort-podffda2e75_a631_43f2_adbc_14ba2a0b562c.slice.
Dec 16 12:36:34.567080 kubelet[2656]: I1216 12:36:34.567034 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2-goldmane-key-pair\") pod \"goldmane-7c778bb748-pgg7z\" (UID: \"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2\") " pod="calico-system/goldmane-7c778bb748-pgg7z"
Dec 16 12:36:34.567894 kubelet[2656]: I1216 12:36:34.567753 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/22a17076-a5c0-4472-8265-e6aeee78b179-calico-apiserver-certs\") pod \"calico-apiserver-66f874bbcf-pr8p8\" (UID: \"22a17076-a5c0-4472-8265-e6aeee78b179\") " pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8"
Dec 16 12:36:34.567894 kubelet[2656]: I1216 12:36:34.567798 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwrg\" (UniqueName: \"kubernetes.io/projected/22a17076-a5c0-4472-8265-e6aeee78b179-kube-api-access-klwrg\") pod \"calico-apiserver-66f874bbcf-pr8p8\" (UID: \"22a17076-a5c0-4472-8265-e6aeee78b179\") " pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8"
Dec 16 12:36:34.567894 kubelet[2656]: I1216 12:36:34.567824 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj8kv\" (UniqueName: \"kubernetes.io/projected/ac18c2cb-4c77-47fc-8aba-04be84531916-kube-api-access-gj8kv\") pod \"coredns-66bc5c9577-rzf6q\" (UID: \"ac18c2cb-4c77-47fc-8aba-04be84531916\") " pod="kube-system/coredns-66bc5c9577-rzf6q"
Dec 16 12:36:34.568017 kubelet[2656]: I1216 12:36:34.567842 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ntr\" (UniqueName: \"kubernetes.io/projected/ffda2e75-a631-43f2-adbc-14ba2a0b562c-kube-api-access-n6ntr\") pod \"whisker-5d45698cb4-kzn6n\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " pod="calico-system/whisker-5d45698cb4-kzn6n"
Dec 16 12:36:34.568017 kubelet[2656]: I1216 12:36:34.567966 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-backend-key-pair\") pod \"whisker-5d45698cb4-kzn6n\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " pod="calico-system/whisker-5d45698cb4-kzn6n"
Dec 16 12:36:34.568017 kubelet[2656]: I1216 12:36:34.567983 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-ca-bundle\") pod \"whisker-5d45698cb4-kzn6n\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " pod="calico-system/whisker-5d45698cb4-kzn6n"
Dec 16 12:36:34.568017 kubelet[2656]: I1216 12:36:34.568002 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7685\" (UniqueName: \"kubernetes.io/projected/9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2-kube-api-access-j7685\") pod \"goldmane-7c778bb748-pgg7z\" (UID: \"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2\") " pod="calico-system/goldmane-7c778bb748-pgg7z"
Dec 16 12:36:34.568110 kubelet[2656]: I1216 12:36:34.568026 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k67k\" (UniqueName: \"kubernetes.io/projected/80e0532e-7ec6-441f-8cab-1a7e0256384d-kube-api-access-5k67k\") pod \"calico-kube-controllers-8489b7ddd5-dm4bl\" (UID: \"80e0532e-7ec6-441f-8cab-1a7e0256384d\") " pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl"
\"kubernetes.io/projected/80e0532e-7ec6-441f-8cab-1a7e0256384d-kube-api-access-5k67k\") pod \"calico-kube-controllers-8489b7ddd5-dm4bl\" (UID: \"80e0532e-7ec6-441f-8cab-1a7e0256384d\") " pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" Dec 16 12:36:34.568110 kubelet[2656]: I1216 12:36:34.568070 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80e0532e-7ec6-441f-8cab-1a7e0256384d-tigera-ca-bundle\") pod \"calico-kube-controllers-8489b7ddd5-dm4bl\" (UID: \"80e0532e-7ec6-441f-8cab-1a7e0256384d\") " pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" Dec 16 12:36:34.568110 kubelet[2656]: I1216 12:36:34.568086 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2-config\") pod \"goldmane-7c778bb748-pgg7z\" (UID: \"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2\") " pod="calico-system/goldmane-7c778bb748-pgg7z" Dec 16 12:36:34.568110 kubelet[2656]: I1216 12:36:34.568102 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac18c2cb-4c77-47fc-8aba-04be84531916-config-volume\") pod \"coredns-66bc5c9577-rzf6q\" (UID: \"ac18c2cb-4c77-47fc-8aba-04be84531916\") " pod="kube-system/coredns-66bc5c9577-rzf6q" Dec 16 12:36:34.568214 kubelet[2656]: I1216 12:36:34.568123 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pgg7z\" (UID: \"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2\") " pod="calico-system/goldmane-7c778bb748-pgg7z" Dec 16 12:36:34.595182 containerd[1496]: time="2025-12-16T12:36:34.595109917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:36:34.719256 containerd[1496]: time="2025-12-16T12:36:34.719144148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrhkw,Uid:9399c1e0-aa0f-4199-b50c-d1558534a15f,Namespace:kube-system,Attempt:0,}" Dec 16 12:36:34.770353 containerd[1496]: time="2025-12-16T12:36:34.770226801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-h72xl,Uid:71fab9ca-3b60-4f2f-9864-f226dad6b716,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:36:34.789350 containerd[1496]: time="2025-12-16T12:36:34.789307526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8489b7ddd5-dm4bl,Uid:80e0532e-7ec6-441f-8cab-1a7e0256384d,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:34.797143 containerd[1496]: time="2025-12-16T12:36:34.797060768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzf6q,Uid:ac18c2cb-4c77-47fc-8aba-04be84531916,Namespace:kube-system,Attempt:0,}" Dec 16 12:36:34.803336 containerd[1496]: time="2025-12-16T12:36:34.803157890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-pr8p8,Uid:22a17076-a5c0-4472-8265-e6aeee78b179,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:36:34.813705 containerd[1496]: time="2025-12-16T12:36:34.813657972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pgg7z,Uid:9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:34.818539 containerd[1496]: 
time="2025-12-16T12:36:34.818456253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45698cb4-kzn6n,Uid:ffda2e75-a631-43f2-adbc-14ba2a0b562c,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:34.847004 containerd[1496]: time="2025-12-16T12:36:34.846936941Z" level=error msg="Failed to destroy network for sandbox \"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.852901 containerd[1496]: time="2025-12-16T12:36:34.852845382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrhkw,Uid:9399c1e0-aa0f-4199-b50c-d1558534a15f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.853671 kubelet[2656]: E1216 12:36:34.853089 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.853671 kubelet[2656]: E1216 12:36:34.853173 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xrhkw" Dec 16 12:36:34.853671 kubelet[2656]: E1216 12:36:34.853194 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xrhkw" Dec 16 12:36:34.853811 kubelet[2656]: E1216 12:36:34.853244 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xrhkw_kube-system(9399c1e0-aa0f-4199-b50c-d1558534a15f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xrhkw_kube-system(9399c1e0-aa0f-4199-b50c-d1558534a15f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ae66bc3e06072312960ef93ac7ab0ad6d03e543a6a0cbf67c6e57cb729d6621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xrhkw" podUID="9399c1e0-aa0f-4199-b50c-d1558534a15f" Dec 16 12:36:34.874932 containerd[1496]: time="2025-12-16T12:36:34.874884308Z" level=error msg="Failed to destroy network for sandbox 
\"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.876416 containerd[1496]: time="2025-12-16T12:36:34.876281268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-h72xl,Uid:71fab9ca-3b60-4f2f-9864-f226dad6b716,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.877138 kubelet[2656]: E1216 12:36:34.876818 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.877138 kubelet[2656]: E1216 12:36:34.876883 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" Dec 16 12:36:34.877138 kubelet[2656]: E1216 12:36:34.876903 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" Dec 16 12:36:34.877298 kubelet[2656]: E1216 12:36:34.876961 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66f874bbcf-h72xl_calico-apiserver(71fab9ca-3b60-4f2f-9864-f226dad6b716)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66f874bbcf-h72xl_calico-apiserver(71fab9ca-3b60-4f2f-9864-f226dad6b716)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1770712deab01e0303969229d57b820481ab47c2674a4bf9ce9719351a241c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" podUID="71fab9ca-3b60-4f2f-9864-f226dad6b716" Dec 16 12:36:34.895801 containerd[1496]: time="2025-12-16T12:36:34.895735273Z" level=error msg="Failed to destroy network for sandbox \"98312a330809e15ae519d88fdcd66e6e73e6de625daebcb6fb27657dd55dfc4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:34.897225 
Dec 16 12:36:34.897690 kubelet[2656]: E1216 12:36:34.897644 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98312a330809e15ae519d88fdcd66e6e73e6de625daebcb6fb27657dd55dfc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.897850 kubelet[2656]: E1216 12:36:34.897787 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98312a330809e15ae519d88fdcd66e6e73e6de625daebcb6fb27657dd55dfc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl"
Dec 16 12:36:34.897850 kubelet[2656]: E1216 12:36:34.897815 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98312a330809e15ae519d88fdcd66e6e73e6de625daebcb6fb27657dd55dfc4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl"
Dec 16 12:36:34.898426 kubelet[2656]: E1216 12:36:34.897956 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8489b7ddd5-dm4bl_calico-system(80e0532e-7ec6-441f-8cab-1a7e0256384d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8489b7ddd5-dm4bl_calico-system(80e0532e-7ec6-441f-8cab-1a7e0256384d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98312a330809e15ae519d88fdcd66e6e73e6de625daebcb6fb27657dd55dfc4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" podUID="80e0532e-7ec6-441f-8cab-1a7e0256384d"
Dec 16 12:36:34.908093 containerd[1496]: time="2025-12-16T12:36:34.908000076Z" level=error msg="Failed to destroy network for sandbox \"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.909507 containerd[1496]: time="2025-12-16T12:36:34.909450996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzf6q,Uid:ac18c2cb-4c77-47fc-8aba-04be84531916,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.909869 kubelet[2656]: E1216 12:36:34.909782 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.909869 kubelet[2656]: E1216 12:36:34.909856 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rzf6q"
Dec 16 12:36:34.909938 kubelet[2656]: E1216 12:36:34.909880 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rzf6q"
Dec 16 12:36:34.909972 kubelet[2656]: E1216 12:36:34.909935 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rzf6q_kube-system(ac18c2cb-4c77-47fc-8aba-04be84531916)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rzf6q_kube-system(ac18c2cb-4c77-47fc-8aba-04be84531916)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"483550af26fb3877b62704c12eb7a6a4974fa35ca052363f4f4e0328a3c6033d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rzf6q" podUID="ac18c2cb-4c77-47fc-8aba-04be84531916"
Dec 16 12:36:34.912340 containerd[1496]: time="2025-12-16T12:36:34.912277317Z" level=error msg="Failed to destroy network for sandbox \"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.914400 containerd[1496]: time="2025-12-16T12:36:34.914047598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-pr8p8,Uid:22a17076-a5c0-4472-8265-e6aeee78b179,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.914722 kubelet[2656]: E1216 12:36:34.914646 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.914722 kubelet[2656]: E1216 12:36:34.914697 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8"
Dec 16 12:36:34.914722 kubelet[2656]: E1216 12:36:34.914714 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8"
Dec 16 12:36:34.914995 kubelet[2656]: E1216 12:36:34.914767 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66f874bbcf-pr8p8_calico-apiserver(22a17076-a5c0-4472-8265-e6aeee78b179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66f874bbcf-pr8p8_calico-apiserver(22a17076-a5c0-4472-8265-e6aeee78b179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c060574dca095eab346f8ae1a9d3c839fc0e5a79b129ff2435bcddba81034f4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8" podUID="22a17076-a5c0-4472-8265-e6aeee78b179"
Dec 16 12:36:34.920571 containerd[1496]: time="2025-12-16T12:36:34.920421239Z" level=error msg="Failed to destroy network for sandbox \"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.921342 containerd[1496]: time="2025-12-16T12:36:34.921307479Z" level=error msg="Failed to destroy network for sandbox \"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.921908 containerd[1496]: time="2025-12-16T12:36:34.921865799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pgg7z,Uid:9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.922391 kubelet[2656]: E1216 12:36:34.922109 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.922391 kubelet[2656]: E1216 12:36:34.922191 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pgg7z"
Dec 16 12:36:34.922391 kubelet[2656]: E1216 12:36:34.922213 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pgg7z"
Dec 16 12:36:34.922507 kubelet[2656]: E1216 12:36:34.922259 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pgg7z_calico-system(9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pgg7z_calico-system(9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fa47a9cabb679d49c29d6d4e8cb1e6ce1e78880496c2ca8058dd269a1f4fe28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2"
Dec 16 12:36:34.923513 containerd[1496]: time="2025-12-16T12:36:34.923380760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45698cb4-kzn6n,Uid:ffda2e75-a631-43f2-adbc-14ba2a0b562c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.923769 kubelet[2656]: E1216 12:36:34.923721 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:36:34.923828 kubelet[2656]: E1216 12:36:34.923789 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45698cb4-kzn6n"
\"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45698cb4-kzn6n" Dec 16 12:36:34.923828 kubelet[2656]: E1216 12:36:34.923807 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d45698cb4-kzn6n" Dec 16 12:36:34.923917 kubelet[2656]: E1216 12:36:34.923886 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d45698cb4-kzn6n_calico-system(ffda2e75-a631-43f2-adbc-14ba2a0b562c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d45698cb4-kzn6n_calico-system(ffda2e75-a631-43f2-adbc-14ba2a0b562c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1f614e340444219efc868b18b7156b584442e232ae9fae137d225cfc49233b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d45698cb4-kzn6n" podUID="ffda2e75-a631-43f2-adbc-14ba2a0b562c" Dec 16 12:36:35.494739 systemd[1]: Created slice kubepods-besteffort-pod1fff280a_2bf1_4f6b_8d2a_055392e26ba8.slice - libcontainer container kubepods-besteffort-pod1fff280a_2bf1_4f6b_8d2a_055392e26ba8.slice. Dec 16 12:36:35.500005 containerd[1496]: time="2025-12-16T12:36:35.499957817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9rch,Uid:1fff280a-2bf1-4f6b-8d2a-055392e26ba8,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:35.598518 containerd[1496]: time="2025-12-16T12:36:35.598459041Z" level=error msg="Failed to destroy network for sandbox \"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:35.600660 systemd[1]: run-netns-cni\x2de326c9d7\x2d6ad7\x2df044\x2dadaf\x2d23848ab36021.mount: Deactivated successfully. 
Dec 16 12:36:35.610756 containerd[1496]: time="2025-12-16T12:36:35.610697563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9rch,Uid:1fff280a-2bf1-4f6b-8d2a-055392e26ba8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:35.610992 kubelet[2656]: E1216 12:36:35.610947 2656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:36:35.611281 kubelet[2656]: E1216 12:36:35.611012 2656 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:35.611281 kubelet[2656]: E1216 12:36:35.611031 2656 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9rch" Dec 16 12:36:35.611281 kubelet[2656]: E1216 12:36:35.611082 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36e27f2a2ea80b1ba9a1d243f5b5a12e822f9ea951e57276926d521fdcd88e52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8" Dec 16 12:36:37.665139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260150330.mount: Deactivated successfully. 
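The calico/node image pull recorded just below completes 150,934,562 bytes in 3.387516764s. A quick sketch of the implied average throughput (the byte count and duration are taken from the log; the arithmetic is only illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 150934562         // "bytes read" from the log
        dur := 3387516764 * time.Nanosecond // "in 3.387516764s"
        fmt.Printf("average pull throughput: %.1f MB/s\n",
            float64(bytesRead)/dur.Seconds()/1e6) // ~44.6 MB/s
    }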
Dec 16 12:36:37.980312 containerd[1496]: time="2025-12-16T12:36:37.980157080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:36:37.982823 containerd[1496]: time="2025-12-16T12:36:37.982690281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.387516764s" Dec 16 12:36:37.982823 containerd[1496]: time="2025-12-16T12:36:37.982733121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:36:37.993575 containerd[1496]: time="2025-12-16T12:36:37.992686043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:36:37.994308 containerd[1496]: time="2025-12-16T12:36:37.994263563Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:36:38.003944 containerd[1496]: time="2025-12-16T12:36:38.003885965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:36:38.008630 containerd[1496]: time="2025-12-16T12:36:38.008591806Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:36:38.017755 containerd[1496]: time="2025-12-16T12:36:38.017712568Z" level=info msg="Container 7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:36:38.020153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700159234.mount: Deactivated successfully. Dec 16 12:36:38.033121 containerd[1496]: time="2025-12-16T12:36:38.033054411Z" level=info msg="CreateContainer within sandbox \"9ff719fe5173ea86327da01f75a8f6bb3c98d343cb84a4bbc8570991dfbc2e78\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c\"" Dec 16 12:36:38.034953 containerd[1496]: time="2025-12-16T12:36:38.034903331Z" level=info msg="StartContainer for \"7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c\"" Dec 16 12:36:38.036628 containerd[1496]: time="2025-12-16T12:36:38.036597812Z" level=info msg="connecting to shim 7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c" address="unix:///run/containerd/s/ec085242e8b31cdc350a912a564defc264f2ca7adaa3036d708181badf465141" protocol=ttrpc version=3 Dec 16 12:36:38.078798 systemd[1]: Started cri-containerd-7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c.scope - libcontainer container 7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c. Dec 16 12:36:38.146747 containerd[1496]: time="2025-12-16T12:36:38.146696713Z" level=info msg="StartContainer for \"7effad0d9b0a0dd070b852d67101dce6ea90bc426cce9516e5fd313cc329a81c\" returns successfully" Dec 16 12:36:38.284883 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 16 12:36:38.285029 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 12:36:38.500831 kubelet[2656]: I1216 12:36:38.500753 2656 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-backend-key-pair\") pod \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " Dec 16 12:36:38.500831 kubelet[2656]: I1216 12:36:38.500802 2656 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-ca-bundle\") pod \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " Dec 16 12:36:38.500831 kubelet[2656]: I1216 12:36:38.500824 2656 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ntr\" (UniqueName: \"kubernetes.io/projected/ffda2e75-a631-43f2-adbc-14ba2a0b562c-kube-api-access-n6ntr\") pod \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\" (UID: \"ffda2e75-a631-43f2-adbc-14ba2a0b562c\") " Dec 16 12:36:38.515815 kubelet[2656]: I1216 12:36:38.515727 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ffda2e75-a631-43f2-adbc-14ba2a0b562c" (UID: "ffda2e75-a631-43f2-adbc-14ba2a0b562c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:36:38.516000 kubelet[2656]: I1216 12:36:38.515962 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ffda2e75-a631-43f2-adbc-14ba2a0b562c" (UID: "ffda2e75-a631-43f2-adbc-14ba2a0b562c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:36:38.516242 kubelet[2656]: I1216 12:36:38.516212 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffda2e75-a631-43f2-adbc-14ba2a0b562c-kube-api-access-n6ntr" (OuterVolumeSpecName: "kube-api-access-n6ntr") pod "ffda2e75-a631-43f2-adbc-14ba2a0b562c" (UID: "ffda2e75-a631-43f2-adbc-14ba2a0b562c"). InnerVolumeSpecName "kube-api-access-n6ntr".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:36:38.601231 kubelet[2656]: I1216 12:36:38.601088 2656 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:36:38.601231 kubelet[2656]: I1216 12:36:38.601129 2656 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffda2e75-a631-43f2-adbc-14ba2a0b562c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:36:38.601231 kubelet[2656]: I1216 12:36:38.601140 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n6ntr\" (UniqueName: \"kubernetes.io/projected/ffda2e75-a631-43f2-adbc-14ba2a0b562c-kube-api-access-n6ntr\") on node \"localhost\" DevicePath \"\"" Dec 16 12:36:38.615678 systemd[1]: Removed slice kubepods-besteffort-podffda2e75_a631_43f2_adbc_14ba2a0b562c.slice - libcontainer container kubepods-besteffort-podffda2e75_a631_43f2_adbc_14ba2a0b562c.slice. Dec 16 12:36:38.662270 kubelet[2656]: I1216 12:36:38.661915 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bkdzq" podStartSLOduration=2.111342956 podStartE2EDuration="11.653735372s" podCreationTimestamp="2025-12-16 12:36:27 +0000 UTC" firstStartedPulling="2025-12-16 12:36:28.440929425 +0000 UTC m=+24.062998145" lastFinishedPulling="2025-12-16 12:36:37.983321841 +0000 UTC m=+33.605390561" observedRunningTime="2025-12-16 12:36:38.639026409 +0000 UTC m=+34.261095169" watchObservedRunningTime="2025-12-16 12:36:38.653735372 +0000 UTC m=+34.275804132" Dec 16 12:36:38.666806 systemd[1]: var-lib-kubelet-pods-ffda2e75\x2da631\x2d43f2\x2dadbc\x2d14ba2a0b562c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn6ntr.mount: Deactivated successfully. Dec 16 12:36:38.666906 systemd[1]: var-lib-kubelet-pods-ffda2e75\x2da631\x2d43f2\x2dadbc\x2d14ba2a0b562c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:36:38.725979 systemd[1]: Created slice kubepods-besteffort-pod5b572f9c_7a0a_4593_9200_fab41f2e68a5.slice - libcontainer container kubepods-besteffort-pod5b572f9c_7a0a_4593_9200_fab41f2e68a5.slice. 
Dec 16 12:36:38.804708 kubelet[2656]: I1216 12:36:38.804660 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b572f9c-7a0a-4593-9200-fab41f2e68a5-whisker-ca-bundle\") pod \"whisker-8f54d85bb-kf5n6\" (UID: \"5b572f9c-7a0a-4593-9200-fab41f2e68a5\") " pod="calico-system/whisker-8f54d85bb-kf5n6" Dec 16 12:36:38.804919 kubelet[2656]: I1216 12:36:38.804894 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b572f9c-7a0a-4593-9200-fab41f2e68a5-whisker-backend-key-pair\") pod \"whisker-8f54d85bb-kf5n6\" (UID: \"5b572f9c-7a0a-4593-9200-fab41f2e68a5\") " pod="calico-system/whisker-8f54d85bb-kf5n6" Dec 16 12:36:38.804989 kubelet[2656]: I1216 12:36:38.804946 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9j25\" (UniqueName: \"kubernetes.io/projected/5b572f9c-7a0a-4593-9200-fab41f2e68a5-kube-api-access-d9j25\") pod \"whisker-8f54d85bb-kf5n6\" (UID: \"5b572f9c-7a0a-4593-9200-fab41f2e68a5\") " pod="calico-system/whisker-8f54d85bb-kf5n6" Dec 16 12:36:39.035823 containerd[1496]: time="2025-12-16T12:36:39.035747566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f54d85bb-kf5n6,Uid:5b572f9c-7a0a-4593-9200-fab41f2e68a5,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:39.252851 systemd-networkd[1411]: cali78816a0cfec: Link UP Dec 16 12:36:39.253064 systemd-networkd[1411]: cali78816a0cfec: Gained carrier Dec 16 12:36:39.278164 containerd[1496]: 2025-12-16 12:36:39.076 [INFO][3792] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:36:39.278164 containerd[1496]: 2025-12-16 12:36:39.120 [INFO][3792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8f54d85bb--kf5n6-eth0 whisker-8f54d85bb- calico-system 5b572f9c-7a0a-4593-9200-fab41f2e68a5 864 0 2025-12-16 12:36:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8f54d85bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8f54d85bb-kf5n6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali78816a0cfec [] [] }} ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-" Dec 16 12:36:39.278164 containerd[1496]: 2025-12-16 12:36:39.120 [INFO][3792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.278164 containerd[1496]: 2025-12-16 12:36:39.192 [INFO][3807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" HandleID="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Workload="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.192 [INFO][3807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" 
HandleID="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Workload="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000508b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8f54d85bb-kf5n6", "timestamp":"2025-12-16 12:36:39.192768594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.192 [INFO][3807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.193 [INFO][3807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.193 [INFO][3807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.206 [INFO][3807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" host="localhost" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.213 [INFO][3807] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.219 [INFO][3807] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.222 [INFO][3807] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.224 [INFO][3807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:39.278382 containerd[1496]: 2025-12-16 12:36:39.225 [INFO][3807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" host="localhost" Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.228 [INFO][3807] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263 Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.233 [INFO][3807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" host="localhost" Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.239 [INFO][3807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" host="localhost" Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.240 [INFO][3807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" host="localhost" Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.240 [INFO][3807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:36:39.279073 containerd[1496]: 2025-12-16 12:36:39.240 [INFO][3807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" HandleID="k8s-pod-network.96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Workload="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.279376 containerd[1496]: 2025-12-16 12:36:39.243 [INFO][3792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8f54d85bb--kf5n6-eth0", GenerateName:"whisker-8f54d85bb-", Namespace:"calico-system", SelfLink:"", UID:"5b572f9c-7a0a-4593-9200-fab41f2e68a5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f54d85bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8f54d85bb-kf5n6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78816a0cfec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:39.279376 containerd[1496]: 2025-12-16 12:36:39.243 [INFO][3792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.279464 containerd[1496]: 2025-12-16 12:36:39.244 [INFO][3792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78816a0cfec ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.279464 containerd[1496]: 2025-12-16 12:36:39.253 [INFO][3792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.279509 containerd[1496]: 2025-12-16 12:36:39.254 [INFO][3792] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8f54d85bb--kf5n6-eth0", GenerateName:"whisker-8f54d85bb-", Namespace:"calico-system", SelfLink:"", UID:"5b572f9c-7a0a-4593-9200-fab41f2e68a5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f54d85bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263", Pod:"whisker-8f54d85bb-kf5n6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78816a0cfec", MAC:"02:3e:65:ef:31:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:39.279572 containerd[1496]: 2025-12-16 12:36:39.276 [INFO][3792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" Namespace="calico-system" Pod="whisker-8f54d85bb-kf5n6" WorkloadEndpoint="localhost-k8s-whisker--8f54d85bb--kf5n6-eth0" Dec 16 12:36:39.420625 containerd[1496]: time="2025-12-16T12:36:39.420426796Z" level=info msg="connecting to shim 96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263" address="unix:///run/containerd/s/c3fcbb4a9a59e5ed547e80b917e5e221ce54931b111e20f624c8aa2f81379ab4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:39.448787 systemd[1]: Started cri-containerd-96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263.scope - libcontainer container 96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263. 
Dec 16 12:36:39.460596 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:39.482461 containerd[1496]: time="2025-12-16T12:36:39.482413927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f54d85bb-kf5n6,Uid:5b572f9c-7a0a-4593-9200-fab41f2e68a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"96622cbdde97f8d9c0c5985e0446fec8e836c556af2d4ed5d568d3c14547e263\"" Dec 16 12:36:39.484403 containerd[1496]: time="2025-12-16T12:36:39.484157407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:36:39.611417 kubelet[2656]: I1216 12:36:39.611381 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:36:39.701212 containerd[1496]: time="2025-12-16T12:36:39.701081647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:39.703170 containerd[1496]: time="2025-12-16T12:36:39.703081087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:36:39.703411 containerd[1496]: time="2025-12-16T12:36:39.703119367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:36:39.703579 kubelet[2656]: E1216 12:36:39.703514 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:36:39.704366 kubelet[2656]: E1216 12:36:39.704320 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:36:39.713129 kubelet[2656]: E1216 12:36:39.713064 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f54d85bb-kf5n6_calico-system(5b572f9c-7a0a-4593-9200-fab41f2e68a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:39.715619 containerd[1496]: time="2025-12-16T12:36:39.715578730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:36:39.955362 containerd[1496]: time="2025-12-16T12:36:39.955018973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:39.956382 containerd[1496]: time="2025-12-16T12:36:39.956333774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:36:39.956474 
containerd[1496]: time="2025-12-16T12:36:39.956360174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:36:39.956656 kubelet[2656]: E1216 12:36:39.956612 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:36:39.956710 kubelet[2656]: E1216 12:36:39.956670 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:36:39.956770 kubelet[2656]: E1216 12:36:39.956750 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f54d85bb-kf5n6_calico-system(5b572f9c-7a0a-4593-9200-fab41f2e68a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:39.956854 kubelet[2656]: E1216 12:36:39.956825 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f54d85bb-kf5n6" podUID="5b572f9c-7a0a-4593-9200-fab41f2e68a5" Dec 16 12:36:40.479897 kubelet[2656]: I1216 12:36:40.479840 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffda2e75-a631-43f2-adbc-14ba2a0b562c" path="/var/lib/kubelet/pods/ffda2e75-a631-43f2-adbc-14ba2a0b562c/volumes" Dec 16 12:36:40.615805 kubelet[2656]: E1216 12:36:40.615435 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f54d85bb-kf5n6" podUID="5b572f9c-7a0a-4593-9200-fab41f2e68a5" Dec 16 12:36:41.027734 systemd-networkd[1411]: cali78816a0cfec: Gained IPv6LL Dec 16 12:36:41.646283 kubelet[2656]: I1216 12:36:41.646206 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:36:42.302220 systemd-networkd[1411]: vxlan.calico: Link UP Dec 16 12:36:42.302227 systemd-networkd[1411]: vxlan.calico: Gained carrier Dec 16 12:36:43.780802 systemd-networkd[1411]: vxlan.calico: Gained IPv6LL Dec 16 12:36:44.251980 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:32934.service - OpenSSH per-connection server daemon (10.0.0.1:32934). Dec 16 12:36:44.344249 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 32934 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:36:44.346698 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:36:44.351403 systemd-logind[1480]: New session 8 of user core. Dec 16 12:36:44.362774 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:36:44.605507 sshd[4228]: Connection closed by 10.0.0.1 port 32934 Dec 16 12:36:44.605902 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Dec 16 12:36:44.610624 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:32934.service: Deactivated successfully. Dec 16 12:36:44.613144 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:36:44.614972 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:36:44.616703 systemd-logind[1480]: Removed session 8. Dec 16 12:36:46.484377 containerd[1496]: time="2025-12-16T12:36:46.484325676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrhkw,Uid:9399c1e0-aa0f-4199-b50c-d1558534a15f,Namespace:kube-system,Attempt:0,}" Dec 16 12:36:46.627842 systemd-networkd[1411]: cali2feb72c93fe: Link UP Dec 16 12:36:46.628503 systemd-networkd[1411]: cali2feb72c93fe: Gained carrier Dec 16 12:36:46.656982 containerd[1496]: 2025-12-16 12:36:46.532 [INFO][4246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xrhkw-eth0 coredns-66bc5c9577- kube-system 9399c1e0-aa0f-4199-b50c-d1558534a15f 801 0 2025-12-16 12:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xrhkw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2feb72c93fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-" Dec 16 12:36:46.656982 containerd[1496]: 2025-12-16 12:36:46.532 [INFO][4246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.656982 containerd[1496]: 2025-12-16 12:36:46.566 [INFO][4256] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" HandleID="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Workload="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.567 [INFO][4256] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" HandleID="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Workload="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xrhkw", "timestamp":"2025-12-16 12:36:46.566908126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.567 [INFO][4256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.567 [INFO][4256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.567 [INFO][4256] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.579 [INFO][4256] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" host="localhost" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.589 [INFO][4256] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.597 [INFO][4256] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.600 [INFO][4256] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.603 [INFO][4256] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:46.657203 containerd[1496]: 2025-12-16 12:36:46.603 [INFO][4256] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" host="localhost" Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.606 [INFO][4256] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1 Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.613 [INFO][4256] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" host="localhost" Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.620 [INFO][4256] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" host="localhost" Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.620 [INFO][4256] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" host="localhost" Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.620 [INFO][4256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:36:46.657415 containerd[1496]: 2025-12-16 12:36:46.620 [INFO][4256] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" HandleID="k8s-pod-network.300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Workload="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.624 [INFO][4246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xrhkw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9399c1e0-aa0f-4199-b50c-d1558534a15f", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xrhkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2feb72c93fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.624 [INFO][4246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.624 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2feb72c93fe 
ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.628 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.634 [INFO][4246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xrhkw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9399c1e0-aa0f-4199-b50c-d1558534a15f", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1", Pod:"coredns-66bc5c9577-xrhkw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2feb72c93fe", MAC:"02:2a:1b:5d:c6:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:46.657527 containerd[1496]: 2025-12-16 12:36:46.651 [INFO][4246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" Namespace="kube-system" Pod="coredns-66bc5c9577-xrhkw" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xrhkw-eth0" Dec 16 12:36:46.684870 containerd[1496]: time="2025-12-16T12:36:46.684823220Z" level=info msg="connecting to shim 
300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1" address="unix:///run/containerd/s/823cce2a4088dd3321832679e56813ad1a3fc2c2c9d3269a8b90fbef07c35065" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:46.712752 systemd[1]: Started cri-containerd-300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1.scope - libcontainer container 300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1. Dec 16 12:36:46.725285 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:46.753518 containerd[1496]: time="2025-12-16T12:36:46.753387268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xrhkw,Uid:9399c1e0-aa0f-4199-b50c-d1558534a15f,Namespace:kube-system,Attempt:0,} returns sandbox id \"300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1\"" Dec 16 12:36:46.760811 containerd[1496]: time="2025-12-16T12:36:46.760066428Z" level=info msg="CreateContainer within sandbox \"300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:36:46.770612 containerd[1496]: time="2025-12-16T12:36:46.770563030Z" level=info msg="Container 79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:36:46.773813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555132933.mount: Deactivated successfully. Dec 16 12:36:46.779706 containerd[1496]: time="2025-12-16T12:36:46.779641911Z" level=info msg="CreateContainer within sandbox \"300606f6b1a707cf272ca5955491b5e5503b1b66167e5dd0e175ce5a1c049bd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694\"" Dec 16 12:36:46.781405 containerd[1496]: time="2025-12-16T12:36:46.781371471Z" level=info msg="StartContainer for \"79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694\"" Dec 16 12:36:46.783402 containerd[1496]: time="2025-12-16T12:36:46.783296751Z" level=info msg="connecting to shim 79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694" address="unix:///run/containerd/s/823cce2a4088dd3321832679e56813ad1a3fc2c2c9d3269a8b90fbef07c35065" protocol=ttrpc version=3 Dec 16 12:36:46.807780 systemd[1]: Started cri-containerd-79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694.scope - libcontainer container 79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694. 
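The coredns WorkloadEndpoint dumps above print their ports as Go hex literals. Decoded, they are the expected coredns ports:

    package main

    import "fmt"

    func main() {
        // Hex port values copied from the WorkloadEndpoint dump above.
        ports := []struct {
            name string
            port int
        }{
            {"dns (UDP)", 0x35},         // 53
            {"dns-tcp (TCP)", 0x35},     // 53
            {"metrics", 0x23c1},         // 9153
            {"liveness-probe", 0x1f90},  // 8080
            {"readiness-probe", 0x1ff5}, // 8181
        }
        for _, p := range ports {
            fmt.Printf("%-16s -> %d\n", p.name, p.port)
        }
    }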
Dec 16 12:36:46.838979 containerd[1496]: time="2025-12-16T12:36:46.838859598Z" level=info msg="StartContainer for \"79bcba5e167badc75497ef588b0b7ccffafb44953cf13fe0677bab9a87132694\" returns successfully" Dec 16 12:36:47.485339 containerd[1496]: time="2025-12-16T12:36:47.485281709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pgg7z,Uid:9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:47.488810 containerd[1496]: time="2025-12-16T12:36:47.488767150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-pr8p8,Uid:22a17076-a5c0-4472-8265-e6aeee78b179,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:36:47.655997 systemd-networkd[1411]: calif26141c1572: Link UP Dec 16 12:36:47.656797 systemd-networkd[1411]: calif26141c1572: Gained carrier Dec 16 12:36:47.662641 kubelet[2656]: I1216 12:36:47.661883 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xrhkw" podStartSLOduration=37.661517168 podStartE2EDuration="37.661517168s" podCreationTimestamp="2025-12-16 12:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:47.660942288 +0000 UTC m=+43.283011008" watchObservedRunningTime="2025-12-16 12:36:47.661517168 +0000 UTC m=+43.283585888" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.544 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--pgg7z-eth0 goldmane-7c778bb748- calico-system 9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2 807 0 2025-12-16 12:36:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-pgg7z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif26141c1572 [] [] }} ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.545 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.587 [INFO][4393] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" HandleID="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Workload="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.587 [INFO][4393] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" HandleID="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Workload="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-7c778bb748-pgg7z", "timestamp":"2025-12-16 12:36:47.58712816 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.587 [INFO][4393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.587 [INFO][4393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.587 [INFO][4393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.603 [INFO][4393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.611 [INFO][4393] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.618 [INFO][4393] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.621 [INFO][4393] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.624 [INFO][4393] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.624 [INFO][4393] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.627 [INFO][4393] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6 Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.632 [INFO][4393] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4393] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" host="localhost" Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:36:47.674513 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4393] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" HandleID="k8s-pod-network.982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Workload="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.648 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pgg7z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-pgg7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif26141c1572", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.650 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.650 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif26141c1572 ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.657 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.658 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pgg7z-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6", Pod:"goldmane-7c778bb748-pgg7z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif26141c1572", MAC:"92:1f:d5:ee:a8:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:47.675097 containerd[1496]: 2025-12-16 12:36:47.672 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" Namespace="calico-system" Pod="goldmane-7c778bb748-pgg7z" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pgg7z-eth0" Dec 16 12:36:47.717636 containerd[1496]: time="2025-12-16T12:36:47.716895774Z" level=info msg="connecting to shim 982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6" address="unix:///run/containerd/s/49e6e196ae14a66bba48f3a7bef0c447fb8b2ebd2d4cb7259267021a94b6f788" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:47.747759 systemd-networkd[1411]: cali2feb72c93fe: Gained IPv6LL Dec 16 12:36:47.750046 systemd[1]: Started cri-containerd-982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6.scope - libcontainer container 982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6. 
Dec 16 12:36:47.767996 systemd-networkd[1411]: cali35db68877d4: Link UP Dec 16 12:36:47.768874 systemd-networkd[1411]: cali35db68877d4: Gained carrier Dec 16 12:36:47.770695 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.550 [INFO][4373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0 calico-apiserver-66f874bbcf- calico-apiserver 22a17076-a5c0-4472-8265-e6aeee78b179 806 0 2025-12-16 12:36:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66f874bbcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66f874bbcf-pr8p8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35db68877d4 [] [] }} ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.551 [INFO][4373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.603 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" HandleID="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Workload="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.606 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" HandleID="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Workload="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000482900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66f874bbcf-pr8p8", "timestamp":"2025-12-16 12:36:47.603219962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.606 [INFO][4400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
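[Editor's note] Note the lock handoff visible above: the apiserver pod's IPAM request logged "About to acquire host-wide IPAM lock" at 12:36:47.606 but only "Acquired" at 12:36:47.642, the same instant the goldmane request logged "Released". Concurrent CNI ADDs are serialized through a single lock; a toy reproduction of that ordering, assuming a plain mutex stands in for the datastore-backed lock:

package main

import (
	"fmt"
	"sync"
	"time"
)

// Two concurrent CNI ADDs contend for one host-wide lock, reproducing the
// "About to acquire" / "Acquired" / "Released" interleaving seen in the log.
func main() {
	var ipamLock sync.Mutex
	var wg sync.WaitGroup
	for _, pod := range []string{"goldmane-7c778bb748-pgg7z", "calico-apiserver-66f874bbcf-pr8p8"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Printf("%s about to acquire host-wide IPAM lock: %s\n", time.Now().Format("15:04:05.000"), pod)
			ipamLock.Lock()
			fmt.Printf("%s acquired lock: %s\n", time.Now().Format("15:04:05.000"), pod)
			time.Sleep(50 * time.Millisecond) // stand-in for block lookup and write
			ipamLock.Unlock()
			fmt.Printf("%s released lock: %s\n", time.Now().Format("15:04:05.000"), pod)
		}(pod)
	}
	wg.Wait()
}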
Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.642 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.704 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.714 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.731 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.735 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.739 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.739 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.743 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31 Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.753 [INFO][4400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.761 [INFO][4400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.761 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" host="localhost" Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.761 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
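[Editor's note] Both sandboxes draw from the same affine block 192.168.88.128/26, which spans .128 through .191 (64 addresses); the results .131 and .132 are simply consecutive free slots. A quick sanity check of the block's bounds and of the assigned addresses' membership, using the standard net/netip package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Walk the block to count its size and find its first and last addresses.
	first := block.Addr()
	last := first
	n := 1
	for a := first.Next(); block.Contains(a); a = a.Next() {
		last = a
		n++
	}
	fmt.Printf("block %s: %d addresses, %s-%s\n", block, n, first, last)

	// Confirm the two assignments from the log fall inside the block.
	for _, s := range []string{"192.168.88.131", "192.168.88.132"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
}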
Dec 16 12:36:47.784495 containerd[1496]: 2025-12-16 12:36:47.761 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" HandleID="k8s-pod-network.603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Workload="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.764 [INFO][4373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0", GenerateName:"calico-apiserver-66f874bbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"22a17076-a5c0-4472-8265-e6aeee78b179", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f874bbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66f874bbcf-pr8p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35db68877d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.765 [INFO][4373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.765 [INFO][4373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35db68877d4 ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.768 [INFO][4373] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.769 [INFO][4373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0", GenerateName:"calico-apiserver-66f874bbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"22a17076-a5c0-4472-8265-e6aeee78b179", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f874bbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31", Pod:"calico-apiserver-66f874bbcf-pr8p8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35db68877d4", MAC:"e6:47:b1:a1:97:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:47.785031 containerd[1496]: 2025-12-16 12:36:47.780 [INFO][4373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-pr8p8" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--pr8p8-eth0" Dec 16 12:36:47.810612 containerd[1496]: time="2025-12-16T12:36:47.810511345Z" level=info msg="connecting to shim 603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31" address="unix:///run/containerd/s/71dab237b8544faf70ea2f36a0d03d469e9f5c4422ff65947ea23f52c0fa7afb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:47.818129 containerd[1496]: time="2025-12-16T12:36:47.818082785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pgg7z,Uid:9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"982475b835f6e4b42d882912f22da28275378933f4508a8d27b90d39acce31b6\"" Dec 16 12:36:47.820026 containerd[1496]: time="2025-12-16T12:36:47.819962266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:36:47.849793 systemd[1]: Started cri-containerd-603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31.scope - libcontainer container 603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31. 
Dec 16 12:36:47.862685 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:47.891172 containerd[1496]: time="2025-12-16T12:36:47.891130833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-pr8p8,Uid:22a17076-a5c0-4472-8265-e6aeee78b179,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"603cd49919ed7baa7e86b2c6b5839f86a53312476eb68737311970c34f8a5d31\"" Dec 16 12:36:48.035059 containerd[1496]: time="2025-12-16T12:36:48.035000529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:48.036248 containerd[1496]: time="2025-12-16T12:36:48.036189609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:36:48.036248 containerd[1496]: time="2025-12-16T12:36:48.036230169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:36:48.036512 kubelet[2656]: E1216 12:36:48.036452 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:36:48.036512 kubelet[2656]: E1216 12:36:48.036511 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:36:48.036704 kubelet[2656]: E1216 12:36:48.036671 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pgg7z_calico-system(9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:48.036740 kubelet[2656]: E1216 12:36:48.036722 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2" Dec 16 12:36:48.037009 containerd[1496]: time="2025-12-16T12:36:48.036961089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:36:48.257801 containerd[1496]: time="2025-12-16T12:36:48.257731632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:48.258783 containerd[1496]: time="2025-12-16T12:36:48.258714232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:36:48.258878 containerd[1496]: time="2025-12-16T12:36:48.258838912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:36:48.259031 kubelet[2656]: E1216 12:36:48.258991 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:36:48.259105 kubelet[2656]: E1216 12:36:48.259043 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:36:48.259153 kubelet[2656]: E1216 12:36:48.259128 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66f874bbcf-pr8p8_calico-apiserver(22a17076-a5c0-4472-8265-e6aeee78b179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:48.259200 kubelet[2656]: E1216 12:36:48.259173 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8" podUID="22a17076-a5c0-4472-8265-e6aeee78b179" Dec 16 12:36:48.480153 containerd[1496]: time="2025-12-16T12:36:48.479988254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9rch,Uid:1fff280a-2bf1-4f6b-8d2a-055392e26ba8,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:48.482521 containerd[1496]: time="2025-12-16T12:36:48.482339174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-h72xl,Uid:71fab9ca-3b60-4f2f-9864-f226dad6b716,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:36:48.606663 systemd-networkd[1411]: cali98b52504a0c: Link UP Dec 16 12:36:48.607316 systemd-networkd[1411]: cali98b52504a0c: Gained carrier Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.524 [INFO][4523] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v9rch-eth0 csi-node-driver- calico-system 1fff280a-2bf1-4f6b-8d2a-055392e26ba8 711 0 2025-12-16 12:36:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] 
map[] [] [] []} {k8s localhost csi-node-driver-v9rch eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali98b52504a0c [] [] }} ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.524 [INFO][4523] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" HandleID="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Workload="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" HandleID="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Workload="localhost-k8s-csi--node--driver--v9rch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v9rch", "timestamp":"2025-12-16 12:36:48.556106542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.568 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.574 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.580 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.584 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.586 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.587 [INFO][4551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.589 [INFO][4551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629 Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.593 [INFO][4551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.600 [INFO][4551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.600 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" host="localhost" Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.600 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
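[Editor's note] Every entry in this journal follows the same shape: a syslog-style timestamp, the unit name with its PID in brackets, then the message (for containerd, a payload carrying its own embedded timestamp and severity). When tracing IPAM runs like the one above across thousands of lines, splitting entries programmatically helps; the pattern below is an assumption matched to this journal's formatting, not a general systemd parser:

package main

import (
	"fmt"
	"regexp"
)

// journalLine matches "Dec 16 12:36:48.556106 containerd[1496]: <message>".
var journalLine = regexp.MustCompile(`^(\w{3} \d+ [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

func main() {
	line := `Dec 16 12:36:48.556106 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.`
	m := journalLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("time:", m[1])
	fmt.Println("unit:", m[2], "pid:", m[3])
	fmt.Println("msg: ", m[4])
}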
Dec 16 12:36:48.626717 containerd[1496]: 2025-12-16 12:36:48.600 [INFO][4551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" HandleID="k8s-pod-network.26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Workload="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.602 [INFO][4523] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v9rch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fff280a-2bf1-4f6b-8d2a-055392e26ba8", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v9rch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98b52504a0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.603 [INFO][4523] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.603 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98b52504a0c ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.606 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.607 [INFO][4523] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v9rch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1fff280a-2bf1-4f6b-8d2a-055392e26ba8", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629", Pod:"csi-node-driver-v9rch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali98b52504a0c", MAC:"22:76:08:d8:86:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:48.628330 containerd[1496]: 2025-12-16 12:36:48.624 [INFO][4523] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" Namespace="calico-system" Pod="csi-node-driver-v9rch" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9rch-eth0" Dec 16 12:36:48.644675 kubelet[2656]: E1216 12:36:48.644130 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8" podUID="22a17076-a5c0-4472-8265-e6aeee78b179" Dec 16 12:36:48.649421 kubelet[2656]: E1216 12:36:48.649373 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2" Dec 16 12:36:48.661157 containerd[1496]: time="2025-12-16T12:36:48.660736073Z" level=info msg="connecting to shim 26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629" address="unix:///run/containerd/s/e8d8e0b29260870fc895428918685b08b1b388d905ce987f6cbd72e2a8b47568" namespace=k8s.io protocol=ttrpc version=3 Dec 16 
12:36:48.699772 systemd[1]: Started cri-containerd-26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629.scope - libcontainer container 26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629. Dec 16 12:36:48.721566 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:48.727255 systemd-networkd[1411]: calif6fa67eefb5: Link UP Dec 16 12:36:48.728329 systemd-networkd[1411]: calif6fa67eefb5: Gained carrier Dec 16 12:36:48.740278 containerd[1496]: time="2025-12-16T12:36:48.740165761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9rch,Uid:1fff280a-2bf1-4f6b-8d2a-055392e26ba8,Namespace:calico-system,Attempt:0,} returns sandbox id \"26b54c6a59d000dff420b0363ea4eefb975e62769ea24e8e57cf761748f36629\"" Dec 16 12:36:48.746808 containerd[1496]: time="2025-12-16T12:36:48.746765241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.524 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0 calico-apiserver-66f874bbcf- calico-apiserver 71fab9ca-3b60-4f2f-9864-f226dad6b716 802 0 2025-12-16 12:36:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66f874bbcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66f874bbcf-h72xl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6fa67eefb5 [] [] }} ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.524 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.556 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" HandleID="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Workload="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.557 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" HandleID="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Workload="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66f874bbcf-h72xl", "timestamp":"2025-12-16 12:36:48.556990262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.557 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.600 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.601 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.671 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.680 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.686 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.690 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.692 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.692 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.695 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3 Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.699 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.717 [INFO][4553] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.717 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" host="localhost" Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.717 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:36:48.749280 containerd[1496]: 2025-12-16 12:36:48.717 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" HandleID="k8s-pod-network.36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Workload="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.722 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0", GenerateName:"calico-apiserver-66f874bbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fab9ca-3b60-4f2f-9864-f226dad6b716", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f874bbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66f874bbcf-h72xl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fa67eefb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.722 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.723 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6fa67eefb5 ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.728 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.728 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0", GenerateName:"calico-apiserver-66f874bbcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fab9ca-3b60-4f2f-9864-f226dad6b716", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f874bbcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3", Pod:"calico-apiserver-66f874bbcf-h72xl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fa67eefb5", MAC:"9e:e3:d5:f3:f7:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:48.749870 containerd[1496]: 2025-12-16 12:36:48.742 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" Namespace="calico-apiserver" Pod="calico-apiserver-66f874bbcf-h72xl" WorkloadEndpoint="localhost-k8s-calico--apiserver--66f874bbcf--h72xl-eth0" Dec 16 12:36:48.777923 containerd[1496]: time="2025-12-16T12:36:48.777873245Z" level=info msg="connecting to shim 36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3" address="unix:///run/containerd/s/e7229c6a1c137816b7f3096f8a762d687427f1e9f6b4d89b8aa16ea1dd4d293d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:48.810783 systemd[1]: Started cri-containerd-36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3.scope - libcontainer container 36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3. 
Dec 16 12:36:48.822809 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:48.835746 systemd-networkd[1411]: calif26141c1572: Gained IPv6LL Dec 16 12:36:48.849377 containerd[1496]: time="2025-12-16T12:36:48.849342732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f874bbcf-h72xl,Uid:71fab9ca-3b60-4f2f-9864-f226dad6b716,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"36498744e3a2f154a76941e47a93e400462033640be63bc8d5397f107fc630b3\"" Dec 16 12:36:48.982208 containerd[1496]: time="2025-12-16T12:36:48.982136226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:48.983509 containerd[1496]: time="2025-12-16T12:36:48.983458266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:36:48.983584 containerd[1496]: time="2025-12-16T12:36:48.983461946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:36:48.983772 kubelet[2656]: E1216 12:36:48.983725 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:36:48.983772 kubelet[2656]: E1216 12:36:48.983769 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:36:48.984385 kubelet[2656]: E1216 12:36:48.983976 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:48.984414 containerd[1496]: time="2025-12-16T12:36:48.984032786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:36:49.199289 containerd[1496]: time="2025-12-16T12:36:49.199234046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:49.200452 containerd[1496]: time="2025-12-16T12:36:49.200406527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:36:49.200519 containerd[1496]: time="2025-12-16T12:36:49.200463847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:36:49.200720 kubelet[2656]: E1216 12:36:49.200682 2656 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:36:49.200801 kubelet[2656]: E1216 12:36:49.200732 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:36:49.201268 containerd[1496]: time="2025-12-16T12:36:49.201039807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:36:49.201578 kubelet[2656]: E1216 12:36:49.201484 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66f874bbcf-h72xl_calico-apiserver(71fab9ca-3b60-4f2f-9864-f226dad6b716): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:49.201578 kubelet[2656]: E1216 12:36:49.201526 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" podUID="71fab9ca-3b60-4f2f-9864-f226dad6b716" Dec 16 12:36:49.400199 containerd[1496]: time="2025-12-16T12:36:49.400145866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:49.407662 containerd[1496]: time="2025-12-16T12:36:49.407588546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:36:49.407662 containerd[1496]: time="2025-12-16T12:36:49.407634186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:36:49.407885 kubelet[2656]: E1216 12:36:49.407831 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:36:49.407885 kubelet[2656]: E1216 12:36:49.407882 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:36:49.407988 kubelet[2656]: E1216 12:36:49.407958 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:49.408047 kubelet[2656]: E1216 12:36:49.407998 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8" Dec 16 12:36:49.480275 containerd[1496]: time="2025-12-16T12:36:49.480108553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzf6q,Uid:ac18c2cb-4c77-47fc-8aba-04be84531916,Namespace:kube-system,Attempt:0,}" Dec 16 12:36:49.481402 containerd[1496]: time="2025-12-16T12:36:49.481346073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8489b7ddd5-dm4bl,Uid:80e0532e-7ec6-441f-8cab-1a7e0256384d,Namespace:calico-system,Attempt:0,}" Dec 16 12:36:49.599040 systemd-networkd[1411]: cali2949805b822: Link UP Dec 16 12:36:49.599581 systemd-networkd[1411]: cali2949805b822: Gained carrier Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.526 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0 calico-kube-controllers-8489b7ddd5- calico-system 80e0532e-7ec6-441f-8cab-1a7e0256384d 803 0 2025-12-16 12:36:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8489b7ddd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8489b7ddd5-dm4bl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2949805b822 [] [] }} ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.527 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.553 [INFO][4716] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" HandleID="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Workload="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.553 [INFO][4716] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" HandleID="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Workload="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8489b7ddd5-dm4bl", "timestamp":"2025-12-16 12:36:49.55334256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.553 [INFO][4716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.553 [INFO][4716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.553 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.565 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.571 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.575 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.578 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.581 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.581 [INFO][4716] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.583 [INFO][4716] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4 Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.587 [INFO][4716] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.594 [INFO][4716] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.594 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" host="localhost" Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.594 [INFO][4716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:36:49.619093 containerd[1496]: 2025-12-16 12:36:49.594 [INFO][4716] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" HandleID="k8s-pod-network.776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Workload="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.597 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0", GenerateName:"calico-kube-controllers-8489b7ddd5-", Namespace:"calico-system", SelfLink:"", UID:"80e0532e-7ec6-441f-8cab-1a7e0256384d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8489b7ddd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8489b7ddd5-dm4bl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2949805b822", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.597 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.597 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2949805b822 ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.601 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.602 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0", GenerateName:"calico-kube-controllers-8489b7ddd5-", Namespace:"calico-system", SelfLink:"", UID:"80e0532e-7ec6-441f-8cab-1a7e0256384d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8489b7ddd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4", Pod:"calico-kube-controllers-8489b7ddd5-dm4bl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2949805b822", MAC:"ae:4e:8f:6a:35:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:49.619696 containerd[1496]: 2025-12-16 12:36:49.614 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" Namespace="calico-system" Pod="calico-kube-controllers-8489b7ddd5-dm4bl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8489b7ddd5--dm4bl-eth0" Dec 16 12:36:49.621273 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:33022.service - OpenSSH per-connection server daemon (10.0.0.1:33022). 
Dec 16 12:36:49.655884 containerd[1496]: time="2025-12-16T12:36:49.655728890Z" level=info msg="connecting to shim 776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4" address="unix:///run/containerd/s/be8045b6b4906ffaf8c8a7f42963b4f9ceb9f41e00f2d521c91fea5cc14f1acb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:49.656792 kubelet[2656]: E1216 12:36:49.656728 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" podUID="71fab9ca-3b60-4f2f-9864-f226dad6b716" Dec 16 12:36:49.661831 kubelet[2656]: E1216 12:36:49.661791 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8" podUID="22a17076-a5c0-4472-8265-e6aeee78b179" Dec 16 12:36:49.662885 kubelet[2656]: E1216 12:36:49.662724 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2" Dec 16 12:36:49.663208 kubelet[2656]: E1216 12:36:49.663170 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8" Dec 16 12:36:49.668890 systemd-networkd[1411]: cali35db68877d4: Gained IPv6LL Dec 16 12:36:49.706791 systemd[1]: Started cri-containerd-776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4.scope - libcontainer container 
776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4. Dec 16 12:36:49.725576 sshd[4737]: Accepted publickey for core from 10.0.0.1 port 33022 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:36:49.728238 sshd-session[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:36:49.736377 systemd-logind[1480]: New session 9 of user core. Dec 16 12:36:49.743231 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:36:49.747227 systemd-networkd[1411]: calibc4f604e662: Link UP Dec 16 12:36:49.747373 systemd-networkd[1411]: calibc4f604e662: Gained carrier Dec 16 12:36:49.754739 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.523 [INFO][4680] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rzf6q-eth0 coredns-66bc5c9577- kube-system ac18c2cb-4c77-47fc-8aba-04be84531916 804 0 2025-12-16 12:36:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rzf6q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc4f604e662 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.523 [INFO][4680] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.555 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" HandleID="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Workload="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.555 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" HandleID="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Workload="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rzf6q", "timestamp":"2025-12-16 12:36:49.5554358 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.555 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.594 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
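Two sandboxes are being set up concurrently here, and their IPAM requests serialize on the host-wide lock: request [4716] acquires it at 12:36:49.553 and releases it at .594, and request [4710], which announced "About to acquire" at .555, only gets the lock at .594. A toy illustration of that serialization follows; allocator and assign are hypothetical names for this sketch, not Calico's implementation, which tracks claims in its datastore:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // allocator hands out addresses from one block under a single lock,
    // the way the log shows [4716] and [4710] taking turns.
    type allocator struct {
        mu   sync.Mutex
        next netip.Addr
    }

    func (al *allocator) assign(who string) {
        al.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer al.mu.Unlock() // "Released host-wide IPAM lock."
        fmt.Println(who, "got", al.next)
        al.next = al.next.Next()
    }

    func main() {
        al := &allocator{next: netip.MustParseAddr("192.168.88.135")}
        var wg sync.WaitGroup
        for _, id := range []string{"[4716]", "[4710]"} {
            wg.Add(1)
            go func(id string) { defer wg.Done(); al.assign(id) }(id)
        }
        wg.Wait() // one request blocks until the other releases the lock
    }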
Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.595 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.671 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.688 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.700 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.705 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.712 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.712 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.714 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2 Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.722 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.738 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.739 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" host="localhost" Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.739 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:36:49.770402 containerd[1496]: 2025-12-16 12:36:49.739 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" HandleID="k8s-pod-network.3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Workload="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 12:36:49.742 [INFO][4680] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rzf6q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ac18c2cb-4c77-47fc-8aba-04be84531916", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rzf6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc4f604e662", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 12:36:49.743 [INFO][4680] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 12:36:49.743 [INFO][4680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc4f604e662 ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 
12:36:49.747 [INFO][4680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 12:36:49.748 [INFO][4680] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rzf6q-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ac18c2cb-4c77-47fc-8aba-04be84531916", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 36, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2", Pod:"coredns-66bc5c9577-rzf6q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc4f604e662", MAC:"e2:2a:d0:46:de:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:36:49.770922 containerd[1496]: 2025-12-16 12:36:49.765 [INFO][4680] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" Namespace="kube-system" Pod="coredns-66bc5c9577-rzf6q" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rzf6q-eth0" Dec 16 12:36:49.809739 containerd[1496]: time="2025-12-16T12:36:49.809684945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8489b7ddd5-dm4bl,Uid:80e0532e-7ec6-441f-8cab-1a7e0256384d,Namespace:calico-system,Attempt:0,} returns sandbox id \"776afd107e3629aaf21a0f5d51a4db627e8ad5fb8e46636e6f4deb8f5605baa4\"" Dec 16 12:36:49.813569 containerd[1496]: 
time="2025-12-16T12:36:49.813457905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:36:49.888499 containerd[1496]: time="2025-12-16T12:36:49.888450152Z" level=info msg="connecting to shim 3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2" address="unix:///run/containerd/s/c2ca4cee1df297f80304d6448c0e7b8f939403cdc1c0dc8f4146705cac5300c6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:36:49.926225 systemd[1]: Started cri-containerd-3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2.scope - libcontainer container 3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2. Dec 16 12:36:49.944167 systemd-resolved[1415]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:36:49.949749 sshd[4785]: Connection closed by 10.0.0.1 port 33022 Dec 16 12:36:49.950176 sshd-session[4737]: pam_unix(sshd:session): session closed for user core Dec 16 12:36:49.954013 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:33022.service: Deactivated successfully. Dec 16 12:36:49.956454 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:36:49.957603 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:36:49.961195 systemd-logind[1480]: Removed session 9. Dec 16 12:36:49.980780 containerd[1496]: time="2025-12-16T12:36:49.980743481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rzf6q,Uid:ac18c2cb-4c77-47fc-8aba-04be84531916,Namespace:kube-system,Attempt:0,} returns sandbox id \"3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2\"" Dec 16 12:36:50.007584 containerd[1496]: time="2025-12-16T12:36:50.007452884Z" level=info msg="CreateContainer within sandbox \"3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:36:50.048512 containerd[1496]: time="2025-12-16T12:36:50.048470167Z" level=info msg="Container 01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:36:50.057504 containerd[1496]: time="2025-12-16T12:36:50.057411408Z" level=info msg="CreateContainer within sandbox \"3873ed0995228332d26a5dd61b95f49ff9bcc2c63a2d11a09f4621145c4acbb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6\"" Dec 16 12:36:50.062438 containerd[1496]: time="2025-12-16T12:36:50.062123529Z" level=info msg="StartContainer for \"01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6\"" Dec 16 12:36:50.064276 containerd[1496]: time="2025-12-16T12:36:50.064227009Z" level=info msg="connecting to shim 01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6" address="unix:///run/containerd/s/c2ca4cee1df297f80304d6448c0e7b8f939403cdc1c0dc8f4146705cac5300c6" protocol=ttrpc version=3 Dec 16 12:36:50.076904 containerd[1496]: time="2025-12-16T12:36:50.076724290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:50.078686 containerd[1496]: time="2025-12-16T12:36:50.078619370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:36:50.078956 
containerd[1496]: time="2025-12-16T12:36:50.078696090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:36:50.079950 kubelet[2656]: E1216 12:36:50.079895 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:36:50.081262 kubelet[2656]: E1216 12:36:50.079960 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:36:50.081262 kubelet[2656]: E1216 12:36:50.080052 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8489b7ddd5-dm4bl_calico-system(80e0532e-7ec6-441f-8cab-1a7e0256384d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:50.081262 kubelet[2656]: E1216 12:36:50.080095 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" podUID="80e0532e-7ec6-441f-8cab-1a7e0256384d" Dec 16 12:36:50.091826 systemd[1]: Started cri-containerd-01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6.scope - libcontainer container 01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6. 
Dec 16 12:36:50.116771 systemd-networkd[1411]: cali98b52504a0c: Gained IPv6LL Dec 16 12:36:50.131799 containerd[1496]: time="2025-12-16T12:36:50.131745335Z" level=info msg="StartContainer for \"01020d7caf3e6dc65365a4727cd2e6003c69e75ddb76be0a138b16fe3d7aedd6\" returns successfully" Dec 16 12:36:50.435727 systemd-networkd[1411]: calif6fa67eefb5: Gained IPv6LL Dec 16 12:36:50.667825 kubelet[2656]: E1216 12:36:50.667780 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" podUID="80e0532e-7ec6-441f-8cab-1a7e0256384d" Dec 16 12:36:50.668742 kubelet[2656]: E1216 12:36:50.668704 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" podUID="71fab9ca-3b60-4f2f-9864-f226dad6b716" Dec 16 12:36:50.669715 kubelet[2656]: E1216 12:36:50.669650 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8" Dec 16 12:36:50.752737 kubelet[2656]: I1216 12:36:50.752227 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rzf6q" podStartSLOduration=40.75220595 podStartE2EDuration="40.75220595s" podCreationTimestamp="2025-12-16 12:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:36:50.75104299 +0000 UTC m=+46.373111710" watchObservedRunningTime="2025-12-16 12:36:50.75220595 +0000 UTC m=+46.374274670" Dec 16 12:36:51.139897 systemd-networkd[1411]: calibc4f604e662: Gained IPv6LL Dec 16 12:36:51.267727 systemd-networkd[1411]: cali2949805b822: Gained IPv6LL Dec 16 12:36:51.670361 kubelet[2656]: E1216 12:36:51.669910 2656 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" podUID="80e0532e-7ec6-441f-8cab-1a7e0256384d" Dec 16 12:36:54.966514 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:57000.service - OpenSSH per-connection server daemon (10.0.0.1:57000). Dec 16 12:36:55.028861 sshd[4918]: Accepted publickey for core from 10.0.0.1 port 57000 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:36:55.030426 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:36:55.036500 systemd-logind[1480]: New session 10 of user core. Dec 16 12:36:55.043245 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:36:55.210320 sshd[4921]: Connection closed by 10.0.0.1 port 57000 Dec 16 12:36:55.217020 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Dec 16 12:36:55.226153 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:57000.service: Deactivated successfully. Dec 16 12:36:55.229322 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:36:55.230355 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:36:55.233292 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:57008.service - OpenSSH per-connection server daemon (10.0.0.1:57008). Dec 16 12:36:55.234221 systemd-logind[1480]: Removed session 10. Dec 16 12:36:55.290177 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 57008 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:36:55.291153 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:36:55.298610 systemd-logind[1480]: New session 11 of user core. Dec 16 12:36:55.312801 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:36:55.479985 containerd[1496]: time="2025-12-16T12:36:55.478775550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:36:55.521746 sshd[4938]: Connection closed by 10.0.0.1 port 57008 Dec 16 12:36:55.523208 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Dec 16 12:36:55.541264 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:57008.service: Deactivated successfully. Dec 16 12:36:55.543198 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:36:55.544663 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:36:55.547684 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:57018.service - OpenSSH per-connection server daemon (10.0.0.1:57018). Dec 16 12:36:55.550489 systemd-logind[1480]: Removed session 11. Dec 16 12:36:55.603420 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 57018 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:36:55.604756 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:36:55.608838 systemd-logind[1480]: New session 12 of user core. Dec 16 12:36:55.621804 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:36:55.681696 containerd[1496]: time="2025-12-16T12:36:55.681653683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:55.682590 containerd[1496]: time="2025-12-16T12:36:55.682542563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:36:55.682645 containerd[1496]: time="2025-12-16T12:36:55.682588643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:36:55.682780 kubelet[2656]: E1216 12:36:55.682744 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:36:55.683532 kubelet[2656]: E1216 12:36:55.682789 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:36:55.683532 kubelet[2656]: E1216 12:36:55.682867 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8f54d85bb-kf5n6_calico-system(5b572f9c-7a0a-4593-9200-fab41f2e68a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:55.684636 containerd[1496]: time="2025-12-16T12:36:55.684603843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:36:55.770918 sshd[4952]: Connection closed by 10.0.0.1 port 57018 Dec 16 12:36:55.770686 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Dec 16 12:36:55.775173 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:57018.service: Deactivated successfully. Dec 16 12:36:55.779165 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:36:55.780246 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:36:55.781927 systemd-logind[1480]: Removed session 12. 
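Each of these ErrImagePull failures feeds kubelet's per-image back-off, which is why pulls of the same missing tags recur at widening intervals throughout this log before settling into ImagePullBackOff. A minimal sketch of that escalation, assuming the stock defaults (delay starts at 10s, doubles per consecutive failure, caps at 5 minutes):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet's default image pull back-off parameters.
        delay := 10 * time.Second
        limit := 5 * time.Minute

        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("failure %d: next pull attempt in %s\n", attempt, delay)
            delay *= 2
            if delay > limit {
                delay = limit
            }
        }
    }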
Dec 16 12:36:55.903451 containerd[1496]: time="2025-12-16T12:36:55.903408938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:36:55.904388 containerd[1496]: time="2025-12-16T12:36:55.904354298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:36:55.904511 containerd[1496]: time="2025-12-16T12:36:55.904384938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:36:55.904627 kubelet[2656]: E1216 12:36:55.904590 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:36:55.904685 kubelet[2656]: E1216 12:36:55.904638 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:36:55.904749 kubelet[2656]: E1216 12:36:55.904726 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8f54d85bb-kf5n6_calico-system(5b572f9c-7a0a-4593-9200-fab41f2e68a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:36:55.905155 kubelet[2656]: E1216 12:36:55.905122 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f54d85bb-kf5n6" podUID="5b572f9c-7a0a-4593-9200-fab41f2e68a5" Dec 16 12:37:00.794083 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:57054.service - OpenSSH per-connection server daemon (10.0.0.1:57054). Dec 16 12:37:00.856620 sshd[4967]: Accepted publickey for core from 10.0.0.1 port 57054 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:37:00.858881 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:00.864172 systemd-logind[1480]: New session 13 of user core. 
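Note how the whisker pod's sync error above carries a bracketed list of two StartContainer failures: when several containers in one pod fail, the pod worker surfaces a single aggregated error rather than one event per container. The shape of that aggregation can be approximated with the standard library's errors.Join; kubelet itself uses its own aggregate-error utility, so this is only an analogy:

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        // One failure per container in the pod, as in the whisker pod above.
        whisker := errors.New(`failed to "StartContainer" for "whisker" with ErrImagePull`)
        backend := errors.New(`failed to "StartContainer" for "whisker-backend" with ErrImagePull`)

        // Joining them yields one error that still reports both causes,
        // mirroring the bracketed [..., ...] list in the kubelet log.
        err := errors.Join(whisker, backend)
        fmt.Println(err)
        fmt.Println(errors.Is(err, whisker)) // true: each cause remains inspectable
    }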
Dec 16 12:37:00.870747 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:37:01.030326 sshd[4970]: Connection closed by 10.0.0.1 port 57054 Dec 16 12:37:01.032575 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:01.043624 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:57054.service: Deactivated successfully. Dec 16 12:37:01.047329 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:37:01.048402 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:37:01.050806 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:36670.service - OpenSSH per-connection server daemon (10.0.0.1:36670). Dec 16 12:37:01.053206 systemd-logind[1480]: Removed session 13. Dec 16 12:37:01.114826 sshd[4984]: Accepted publickey for core from 10.0.0.1 port 36670 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:37:01.116298 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:01.121624 systemd-logind[1480]: New session 14 of user core. Dec 16 12:37:01.129771 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:37:01.342846 sshd[4987]: Connection closed by 10.0.0.1 port 36670 Dec 16 12:37:01.343148 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:01.354997 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:36670.service: Deactivated successfully. Dec 16 12:37:01.357821 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:37:01.359096 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:37:01.361250 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:36672.service - OpenSSH per-connection server daemon (10.0.0.1:36672). Dec 16 12:37:01.362172 systemd-logind[1480]: Removed session 14. Dec 16 12:37:01.414281 sshd[4999]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:37:01.415744 sshd-session[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:01.420918 systemd-logind[1480]: New session 15 of user core. Dec 16 12:37:01.434777 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:37:02.117613 sshd[5002]: Connection closed by 10.0.0.1 port 36672 Dec 16 12:37:02.120458 sshd-session[4999]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:02.129305 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:36672.service: Deactivated successfully. Dec 16 12:37:02.131629 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:37:02.134992 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:37:02.138479 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:36684.service - OpenSSH per-connection server daemon (10.0.0.1:36684). Dec 16 12:37:02.140503 systemd-logind[1480]: Removed session 15. Dec 16 12:37:02.209624 sshd[5020]: Accepted publickey for core from 10.0.0.1 port 36684 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:37:02.211344 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:02.216025 systemd-logind[1480]: New session 16 of user core. Dec 16 12:37:02.225780 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:37:02.599068 sshd[5023]: Connection closed by 10.0.0.1 port 36684 Dec 16 12:37:02.599602 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:02.614641 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:36684.service: Deactivated successfully. Dec 16 12:37:02.616698 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:37:02.620140 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:37:02.625755 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:36700.service - OpenSSH per-connection server daemon (10.0.0.1:36700). Dec 16 12:37:02.628693 systemd-logind[1480]: Removed session 16. Dec 16 12:37:02.684761 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 36700 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:37:02.686273 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:02.691673 systemd-logind[1480]: New session 17 of user core. Dec 16 12:37:02.704295 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:37:02.842783 sshd[5044]: Connection closed by 10.0.0.1 port 36700 Dec 16 12:37:02.843153 sshd-session[5041]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:02.848017 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:36700.service: Deactivated successfully. Dec 16 12:37:02.850125 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:37:02.851213 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:37:02.852427 systemd-logind[1480]: Removed session 17. Dec 16 12:37:03.478298 containerd[1496]: time="2025-12-16T12:37:03.478244917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:37:03.708722 containerd[1496]: time="2025-12-16T12:37:03.708672166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:37:03.719613 containerd[1496]: time="2025-12-16T12:37:03.719527926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:37:03.719738 containerd[1496]: time="2025-12-16T12:37:03.719589886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:37:03.719857 kubelet[2656]: E1216 12:37:03.719819 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:37:03.720183 kubelet[2656]: E1216 12:37:03.719866 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:37:03.720183 kubelet[2656]: E1216 12:37:03.719951 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pgg7z_calico-system(9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:03.720183 kubelet[2656]: E1216 12:37:03.719981 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2"
Dec 16 12:37:04.479140 containerd[1496]: time="2025-12-16T12:37:04.479071234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:37:04.695694 containerd[1496]: time="2025-12-16T12:37:04.695589002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:37:04.698887 containerd[1496]: time="2025-12-16T12:37:04.698832562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:37:04.699049 containerd[1496]: time="2025-12-16T12:37:04.698929122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:37:04.699106 kubelet[2656]: E1216 12:37:04.699069 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:37:04.699106 kubelet[2656]: E1216 12:37:04.699126 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:37:04.700155 kubelet[2656]: E1216 12:37:04.699304 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66f874bbcf-pr8p8_calico-apiserver(22a17076-a5c0-4472-8265-e6aeee78b179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:04.700155 kubelet[2656]: E1216 12:37:04.699347 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-pr8p8" podUID="22a17076-a5c0-4472-8265-e6aeee78b179"
Dec 16 12:37:04.700246 containerd[1496]: time="2025-12-16T12:37:04.699971882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:37:04.911966 containerd[1496]: time="2025-12-16T12:37:04.911917730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:37:04.913019 containerd[1496]: time="2025-12-16T12:37:04.912946370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:37:04.913019 containerd[1496]: time="2025-12-16T12:37:04.912989690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 12:37:04.913496 kubelet[2656]: E1216 12:37:04.913157 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:37:04.913496 kubelet[2656]: E1216 12:37:04.913206 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:37:04.913496 kubelet[2656]: E1216 12:37:04.913328 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:04.914152 containerd[1496]: time="2025-12-16T12:37:04.913524330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:37:05.133955 containerd[1496]: time="2025-12-16T12:37:05.133894058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:37:05.146633 containerd[1496]: time="2025-12-16T12:37:05.146528058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:37:05.146768 containerd[1496]: time="2025-12-16T12:37:05.146642978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:37:05.146891 kubelet[2656]: E1216 12:37:05.146818 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:37:05.146891 kubelet[2656]: E1216 12:37:05.146885 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:37:05.147141 kubelet[2656]: E1216 12:37:05.147040 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8489b7ddd5-dm4bl_calico-system(80e0532e-7ec6-441f-8cab-1a7e0256384d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:05.147141 kubelet[2656]: E1216 12:37:05.147086 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8489b7ddd5-dm4bl" podUID="80e0532e-7ec6-441f-8cab-1a7e0256384d"
Dec 16 12:37:05.147372 containerd[1496]: time="2025-12-16T12:37:05.147318818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:37:05.364112 containerd[1496]: time="2025-12-16T12:37:05.364042386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:37:05.365141 containerd[1496]: time="2025-12-16T12:37:05.365094346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:37:05.365206 containerd[1496]: time="2025-12-16T12:37:05.365139506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:37:05.365406 kubelet[2656]: E1216 12:37:05.365333 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:37:05.365406 kubelet[2656]: E1216 12:37:05.365402 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:37:05.365635 kubelet[2656]: E1216 12:37:05.365606 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-66f874bbcf-h72xl_calico-apiserver(71fab9ca-3b60-4f2f-9864-f226dad6b716): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:05.365786 kubelet[2656]: E1216 12:37:05.365741 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f874bbcf-h72xl" podUID="71fab9ca-3b60-4f2f-9864-f226dad6b716"
Dec 16 12:37:05.366214 containerd[1496]: time="2025-12-16T12:37:05.366124586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:37:05.588456 containerd[1496]: time="2025-12-16T12:37:05.588355793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:37:05.589445 containerd[1496]: time="2025-12-16T12:37:05.589395873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:37:05.589445 containerd[1496]: time="2025-12-16T12:37:05.589466753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 12:37:05.589701 kubelet[2656]: E1216 12:37:05.589638 2656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:37:05.589799 kubelet[2656]: E1216 12:37:05.589710 2656 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:37:05.589912 kubelet[2656]: E1216 12:37:05.589850 2656 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v9rch_calico-system(1fff280a-2bf1-4f6b-8d2a-055392e26ba8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:37:05.589912 kubelet[2656]: E1216 12:37:05.589897 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v9rch" podUID="1fff280a-2bf1-4f6b-8d2a-055392e26ba8"
Dec 16 12:37:07.855370 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:36716.service - OpenSSH per-connection server daemon (10.0.0.1:36716).
Dec 16 12:37:07.915165 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:37:07.916595 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:37:07.922041 systemd-logind[1480]: New session 18 of user core.
Dec 16 12:37:07.929766 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 12:37:08.058258 sshd[5067]: Connection closed by 10.0.0.1 port 36716
Dec 16 12:37:08.058895 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Dec 16 12:37:08.063373 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:36716.service: Deactivated successfully.
Dec 16 12:37:08.065192 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 12:37:08.065891 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit.
Dec 16 12:37:08.067156 systemd-logind[1480]: Removed session 18.
Dec 16 12:37:10.481121 kubelet[2656]: E1216 12:37:10.481031 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f54d85bb-kf5n6" podUID="5b572f9c-7a0a-4593-9200-fab41f2e68a5"
Dec 16 12:37:13.075751 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:43244.service - OpenSSH per-connection server daemon (10.0.0.1:43244).
Dec 16 12:37:13.145668 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 43244 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:37:13.147260 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:37:13.154163 systemd-logind[1480]: New session 19 of user core.
Dec 16 12:37:13.164749 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 12:37:13.344576 sshd[5113]: Connection closed by 10.0.0.1 port 43244
Dec 16 12:37:13.345284 sshd-session[5110]: pam_unix(sshd:session): session closed for user core
Dec 16 12:37:13.349695 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit.
Dec 16 12:37:13.349956 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:43244.service: Deactivated successfully.
Dec 16 12:37:13.352018 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 12:37:13.354108 systemd-logind[1480]: Removed session 19.
Dec 16 12:37:14.479318 kubelet[2656]: E1216 12:37:14.479258 2656 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pgg7z" podUID="9c610c1e-d14d-4a7b-8ace-6bdd6870d6a2"
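
The records above show the same failure pattern for every ghcr.io/flatcar/calico image at tag v3.30.4: containerd's fetch gets a 404 Not Found from ghcr.io, the pull fails with NotFound, and the kubelet moves the affected pods into ImagePullBackOff. What follows is an illustrative sketch, not part of this log and not the tooling used on this node: a minimal Python check of whether a tag resolves in a registry speaking the OCI distribution API, assuming ghcr.io's anonymous-token endpoint for public repositories. The repository and tag are taken from the failing records; all other names are hypothetical.

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/goldmane"  # repository from the failing pull records above
TAG = "v3.30.4"                   # tag the kubelet cannot resolve

def anonymous_token(repo):
    # Assumption: ghcr.io issues anonymous pull tokens for public repositories.
    url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]

def tag_exists(repo, tag):
    # HEAD the manifest: 200 means the tag resolves; a 404 corresponds to the
    # "not found" errors containerd logged above.
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {anonymous_token(repo)}",
            # Accept both multi-arch index and Docker manifest-list types.
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} exists: {tag_exists(REPO, TAG)}")

Run against each image named in the log, a False result would confirm the tag is simply absent from the registry, consistent with every fetch above ending in "404 Not Found" rather than an authorization or network error.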