Dec 16 12:32:41.787475 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:32:41.787496 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:32:41.787506 kernel: KASLR enabled
Dec 16 12:32:41.787512 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:32:41.787517 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 16 12:32:41.787523 kernel: random: crng init done
Dec 16 12:32:41.787529 kernel: secureboot: Secure boot disabled
Dec 16 12:32:41.787535 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:32:41.787541 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 16 12:32:41.787548 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:32:41.787554 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787560 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787565 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787571 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787579 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787587 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787593 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787600 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787606 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:32:41.787612 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:32:41.787618 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:32:41.787624 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:41.787630 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 16 12:32:41.787636 kernel: Zone ranges:
Dec 16 12:32:41.787642 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:41.787649 kernel: DMA32 empty
Dec 16 12:32:41.787655 kernel: Normal empty
Dec 16 12:32:41.787661 kernel: Device empty
Dec 16 12:32:41.787667 kernel: Movable zone start for each node
Dec 16 12:32:41.787673 kernel: Early memory node ranges
Dec 16 12:32:41.787678 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 16 12:32:41.787684 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 16 12:32:41.787690 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 16 12:32:41.787696 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 16 12:32:41.787702 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 16 12:32:41.787708 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 16 12:32:41.787713 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 16 12:32:41.787721 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 16 12:32:41.787726 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 16 12:32:41.787732 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 16 12:32:41.787741 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 16 12:32:41.787747 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 16 12:32:41.787754 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:32:41.787761 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:32:41.787768 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:32:41.787774 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 16 12:32:41.787781 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:32:41.787787 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:32:41.787793 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:32:41.787800 kernel: psci: Trusted OS migration not required
Dec 16 12:32:41.787807 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:32:41.787813 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:32:41.787819 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:32:41.787828 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:32:41.787835 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:32:41.787841 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:32:41.787848 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:32:41.787854 kernel: CPU features: detected: Spectre-v4
Dec 16 12:32:41.787861 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:32:41.787868 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:32:41.787874 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:32:41.787881 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:32:41.787887 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:32:41.787893 kernel: alternatives: applying boot alternatives
Dec 16 12:32:41.787901 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:32:41.787909 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:32:41.787916 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:32:41.787922 kernel: Fallback order for Node 0: 0
Dec 16 12:32:41.787928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:32:41.787935 kernel: Policy zone: DMA
Dec 16 12:32:41.787941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:32:41.787948 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:32:41.787954 kernel: software IO TLB: area num 4.
Dec 16 12:32:41.787961 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:32:41.787967 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 16 12:32:41.787974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:32:41.787981 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:32:41.787989 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:32:41.787996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:32:41.788002 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:32:41.788009 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:32:41.788016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:32:41.788022 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:32:41.788028 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:32:41.788035 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:32:41.788041 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:32:41.788048 kernel: GICv3: 256 SPIs implemented
Dec 16 12:32:41.788055 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:32:41.788062 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:32:41.788068 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:32:41.788074 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:32:41.788080 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:32:41.788102 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:32:41.788122 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:32:41.788129 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:32:41.788136 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:32:41.788143 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:32:41.788149 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:32:41.788155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:41.788164 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:32:41.788170 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:32:41.788177 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:32:41.788183 kernel: arm-pv: using stolen time PV
Dec 16 12:32:41.788190 kernel: Console: colour dummy device 80x25
Dec 16 12:32:41.788196 kernel: ACPI: Core revision 20240827
Dec 16 12:32:41.788203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:32:41.788210 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:32:41.788216 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:32:41.788223 kernel: landlock: Up and running.
Dec 16 12:32:41.788231 kernel: SELinux: Initializing.
Dec 16 12:32:41.788237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:32:41.788244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:32:41.788250 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:32:41.788257 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:32:41.788264 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:32:41.788271 kernel: Remapping and enabling EFI services.
Dec 16 12:32:41.788277 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:32:41.788284 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:32:41.788297 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:32:41.788304 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:32:41.788311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:41.788319 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:32:41.788332 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:32:41.788339 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:32:41.788346 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:32:41.788353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:41.788362 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:32:41.788369 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:32:41.788376 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:32:41.788383 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:32:41.788390 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:32:41.788397 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:32:41.788404 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:32:41.788411 kernel: SMP: Total of 4 processors activated.
Dec 16 12:32:41.788417 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:32:41.788426 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:32:41.788433 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:32:41.788440 kernel: CPU features: detected: Common not Private translations
Dec 16 12:32:41.788447 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:32:41.788454 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:32:41.788461 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:32:41.788468 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:32:41.788474 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:32:41.788481 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:32:41.788490 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:32:41.788497 kernel: alternatives: applying system-wide alternatives
Dec 16 12:32:41.788504 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:32:41.788512 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 16 12:32:41.788519 kernel: devtmpfs: initialized
Dec 16 12:32:41.788526 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:32:41.788537 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:32:41.788544 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:32:41.788551 kernel: 0 pages in range for non-PLT usage
Dec 16 12:32:41.788560 kernel: 508400 pages in range for PLT usage
Dec 16 12:32:41.788567 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:32:41.788574 kernel: SMBIOS 3.0.0 present.
Dec 16 12:32:41.788581 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:32:41.788588 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:32:41.788595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:32:41.788602 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:32:41.788609 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:32:41.788616 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:32:41.788625 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:32:41.788632 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Dec 16 12:32:41.788639 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:32:41.788646 kernel: cpuidle: using governor menu
Dec 16 12:32:41.788653 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:32:41.788660 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:32:41.788667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:32:41.788674 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:32:41.788681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:32:41.788690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:32:41.788697 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:32:41.788704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:32:41.788711 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:32:41.788719 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:32:41.788725 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:32:41.788733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:32:41.788740 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:32:41.788747 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:32:41.788755 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:32:41.788762 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:32:41.788769 kernel: ACPI: Interpreter enabled
Dec 16 12:32:41.788776 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:32:41.788782 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:32:41.788789 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:32:41.788796 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:32:41.788803 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:32:41.788810 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:32:41.788817 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:32:41.788825 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:32:41.788832 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:32:41.789134 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:32:41.789211 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:32:41.789270 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:32:41.789333 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:32:41.789396 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:32:41.789410 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:32:41.789418 kernel: PCI host bridge to bus 0000:00
Dec 16 12:32:41.789498 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:32:41.789552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:32:41.789603 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:32:41.789654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:32:41.789732 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:32:41.789805 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:32:41.789866 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:32:41.789926 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:32:41.789984 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:32:41.790042 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:32:41.790099 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:32:41.790183 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:32:41.790240 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:32:41.790291 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:32:41.790351 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:32:41.790361 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:32:41.790369 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:32:41.790376 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:32:41.790383 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:32:41.790393 kernel: iommu: Default domain type: Translated
Dec 16 12:32:41.790400 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:32:41.790407 kernel: efivars: Registered efivars operations
Dec 16 12:32:41.790414 kernel: vgaarb: loaded
Dec 16 12:32:41.790421 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:32:41.790428 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:32:41.790435 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:32:41.790442 kernel: pnp: PnP ACPI init
Dec 16 12:32:41.790514 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:32:41.790526 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:32:41.790533 kernel: NET: Registered PF_INET protocol family
Dec 16 12:32:41.790540 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:32:41.790547 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:32:41.790554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:32:41.790562 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:32:41.790568 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:32:41.790575 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:32:41.790584 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:32:41.790591 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:32:41.790604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:32:41.790611 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:32:41.790618 kernel: kvm [1]: HYP mode not available
Dec 16 12:32:41.790625 kernel: Initialise system trusted keyrings
Dec 16 12:32:41.790632 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:32:41.790639 kernel: Key type asymmetric registered
Dec 16 12:32:41.790646 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:32:41.790655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:32:41.790662 kernel: io scheduler mq-deadline registered
Dec 16 12:32:41.790669 kernel: io scheduler kyber registered
Dec 16 12:32:41.790676 kernel: io scheduler bfq registered
Dec 16 12:32:41.790683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:32:41.790690 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:32:41.790698 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:32:41.790759 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:32:41.790769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:32:41.790778 kernel: thunder_xcv, ver 1.0
Dec 16 12:32:41.790785 kernel: thunder_bgx, ver 1.0
Dec 16 12:32:41.790792 kernel: nicpf, ver 1.0
Dec 16 12:32:41.790799 kernel: nicvf, ver 1.0
Dec 16 12:32:41.790867 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:32:41.790924 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:32:41 UTC (1765888361)
Dec 16 12:32:41.790933 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:32:41.790941 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:32:41.790949 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:32:41.790956 kernel: watchdog: NMI not fully supported
Dec 16 12:32:41.790963 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:32:41.790970 kernel: Segment Routing with IPv6
Dec 16 12:32:41.790977 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:32:41.790984 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:32:41.790991 kernel: Key type dns_resolver registered
Dec 16 12:32:41.790998 kernel: registered taskstats version 1
Dec 16 12:32:41.791006 kernel: Loading compiled-in X.509 certificates
Dec 16 12:32:41.791013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:32:41.791021 kernel: Demotion targets for Node 0: null
Dec 16 12:32:41.791028 kernel: Key type .fscrypt registered
Dec 16 12:32:41.791035 kernel: Key type fscrypt-provisioning registered
Dec 16 12:32:41.791041 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:32:41.791048 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:32:41.791055 kernel: ima: No architecture policies found
Dec 16 12:32:41.791062 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:32:41.791069 kernel: clk: Disabling unused clocks
Dec 16 12:32:41.791076 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:32:41.791084 kernel: Warning: unable to open an initial console.
Dec 16 12:32:41.791091 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:32:41.791098 kernel: Run /init as init process
Dec 16 12:32:41.791105 kernel: with arguments:
Dec 16 12:32:41.791127 kernel: /init
Dec 16 12:32:41.791133 kernel: with environment:
Dec 16 12:32:41.791140 kernel: HOME=/
Dec 16 12:32:41.791147 kernel: TERM=linux
Dec 16 12:32:41.791155 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:32:41.791167 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:32:41.791175 systemd[1]: Detected virtualization kvm.
Dec 16 12:32:41.791182 systemd[1]: Detected architecture arm64.
Dec 16 12:32:41.791189 systemd[1]: Running in initrd.
Dec 16 12:32:41.791196 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:32:41.791204 systemd[1]: Hostname set to .
Dec 16 12:32:41.791211 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:32:41.791220 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:32:41.791227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:32:41.791235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:32:41.791243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:32:41.791250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:32:41.791258 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:32:41.791266 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:32:41.791276 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:32:41.791284 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:32:41.791291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:32:41.791299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:32:41.791306 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:32:41.791314 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:32:41.791321 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:32:41.791335 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:32:41.791346 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:32:41.791353 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:32:41.791361 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:32:41.791368 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:32:41.791376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:32:41.791383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:32:41.791391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:32:41.791398 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:32:41.791407 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:32:41.791415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:32:41.791422 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:32:41.791430 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:32:41.791438 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:32:41.791446 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:32:41.791453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:32:41.791461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:32:41.791468 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:32:41.791477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:32:41.791485 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:32:41.791493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:32:41.791519 systemd-journald[244]: Collecting audit messages is disabled.
Dec 16 12:32:41.791559 systemd-journald[244]: Journal started
Dec 16 12:32:41.791577 systemd-journald[244]: Runtime Journal (/run/log/journal/1361a39622004d989cfeaaa0ce512577) is 6M, max 48.5M, 42.4M free.
Dec 16 12:32:41.797209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:32:41.783226 systemd-modules-load[245]: Inserted module 'overlay'
Dec 16 12:32:41.799564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:32:41.800963 systemd-modules-load[245]: Inserted module 'br_netfilter'
Dec 16 12:32:41.802476 kernel: Bridge firewalling registered
Dec 16 12:32:41.802498 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:32:41.803698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:32:41.804904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:32:41.809542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:32:41.811213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:32:41.813910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:32:41.832031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:32:41.840370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:32:41.844796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:32:41.845600 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:32:41.848673 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:32:41.852866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:32:41.854001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:32:41.856718 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:32:41.884301 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:32:41.904434 systemd-resolved[286]: Positive Trust Anchors:
Dec 16 12:32:41.904453 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:32:41.904484 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:32:41.909818 systemd-resolved[286]: Defaulting to hostname 'linux'.
Dec 16 12:32:41.910941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:32:41.912959 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:32:41.967146 kernel: SCSI subsystem initialized
Dec 16 12:32:41.972128 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:32:41.980149 kernel: iscsi: registered transport (tcp)
Dec 16 12:32:41.994168 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:32:41.994192 kernel: QLogic iSCSI HBA Driver
Dec 16 12:32:42.011777 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:32:42.037181 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:32:42.038661 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:32:42.088739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:32:42.090976 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:32:42.152138 kernel: raid6: neonx8 gen() 15789 MB/s
Dec 16 12:32:42.169126 kernel: raid6: neonx4 gen() 15821 MB/s
Dec 16 12:32:42.186124 kernel: raid6: neonx2 gen() 13239 MB/s
Dec 16 12:32:42.203123 kernel: raid6: neonx1 gen() 10412 MB/s
Dec 16 12:32:42.220122 kernel: raid6: int64x8 gen() 6903 MB/s
Dec 16 12:32:42.237134 kernel: raid6: int64x4 gen() 7313 MB/s
Dec 16 12:32:42.254128 kernel: raid6: int64x2 gen() 6098 MB/s
Dec 16 12:32:42.271131 kernel: raid6: int64x1 gen() 5049 MB/s
Dec 16 12:32:42.271145 kernel: raid6: using algorithm neonx4 gen() 15821 MB/s
Dec 16 12:32:42.288160 kernel: raid6: .... xor() 12341 MB/s, rmw enabled
Dec 16 12:32:42.288198 kernel: raid6: using neon recovery algorithm
Dec 16 12:32:42.293478 kernel: xor: measuring software checksum speed
Dec 16 12:32:42.293522 kernel: 8regs : 21618 MB/sec
Dec 16 12:32:42.294133 kernel: 32regs : 21699 MB/sec
Dec 16 12:32:42.295198 kernel: arm64_neon : 24295 MB/sec
Dec 16 12:32:42.295212 kernel: xor: using function: arm64_neon (24295 MB/sec)
Dec 16 12:32:42.347149 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:32:42.353912 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:32:42.356662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:32:42.398065 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Dec 16 12:32:42.402181 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:32:42.404002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:32:42.437288 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Dec 16 12:32:42.465526 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:32:42.468026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:32:42.541772 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:32:42.545890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:32:42.597155 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:32:42.599137 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:32:42.602150 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:32:42.602195 kernel: GPT:9289727 != 19775487
Dec 16 12:32:42.602205 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:32:42.602214 kernel: GPT:9289727 != 19775487
Dec 16 12:32:42.602667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:32:42.603725 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:32:42.602801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:32:42.606344 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:32:42.606310 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:32:42.608399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:32:42.637727 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:32:42.643357 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:32:42.650576 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:32:42.652863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:32:42.664878 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:32:42.666050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:32:42.674479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:32:42.675604 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:32:42.677437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:32:42.679236 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:32:42.681882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:32:42.683842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:32:42.711184 disk-uuid[591]: Primary Header is updated.
Dec 16 12:32:42.711184 disk-uuid[591]: Secondary Entries is updated.
Dec 16 12:32:42.711184 disk-uuid[591]: Secondary Header is updated.
Dec 16 12:32:42.714684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:32:42.718158 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:32:42.722238 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:32:43.723350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:32:43.723904 disk-uuid[597]: The operation has completed successfully.
Dec 16 12:32:43.752040 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:32:43.752154 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:32:43.776383 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:32:43.807231 sh[612]: Success
Dec 16 12:32:43.820604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:32:43.820657 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:32:43.820688 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:32:43.828142 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:32:43.856347 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:32:43.858286 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:32:43.876879 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:32:43.883505 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624)
Dec 16 12:32:43.883557 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:32:43.883568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:32:43.888374 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:32:43.888393 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:32:43.889493 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:32:43.890841 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:32:43.892178 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:32:43.893071 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:32:43.894804 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:32:43.914211 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Dec 16 12:32:43.918339 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:32:43.918434 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:32:43.921150 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:32:43.921206 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:32:43.926126 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:32:43.926206 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:32:43.928741 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:32:44.008202 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:32:44.011454 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:32:44.034301 ignition[699]: Ignition 2.22.0
Dec 16 12:32:44.034323 ignition[699]: Stage: fetch-offline
Dec 16 12:32:44.034371 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:32:44.034379 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:32:44.034469 ignition[699]: parsed url from cmdline: ""
Dec 16 12:32:44.034473 ignition[699]: no config URL provided
Dec 16 12:32:44.034478 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:32:44.034484 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:32:44.034507 ignition[699]: op(1): [started] loading QEMU firmware config module
Dec 16 12:32:44.034512 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:32:44.040395 ignition[699]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:32:44.054235 systemd-networkd[803]: lo: Link UP
Dec 16 12:32:44.054248 systemd-networkd[803]: lo: Gained carrier
Dec 16 12:32:44.054929 systemd-networkd[803]: Enumeration completed
Dec 16 12:32:44.055061 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:32:44.055348 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:32:44.055353 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:32:44.056347 systemd-networkd[803]: eth0: Link UP
Dec 16 12:32:44.056446 systemd-networkd[803]: eth0: Gained carrier
Dec 16 12:32:44.056455 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:32:44.056526 systemd[1]: Reached target network.target - Network.
Dec 16 12:32:44.079171 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:32:44.100038 ignition[699]: parsing config with SHA512: 54bc419b022081626c3b233bf76c80c8b764a528ae3fc9b44c128fb9d86452e57ba56f3c091837d4a494c26fdaef6c639a984f117086d2526df2cf3b5254d358
Dec 16 12:32:44.105538 unknown[699]: fetched base config from "system"
Dec 16 12:32:44.105550 unknown[699]: fetched user config from "qemu"
Dec 16 12:32:44.105961 ignition[699]: fetch-offline: fetch-offline passed
Dec 16 12:32:44.108294 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:32:44.106028 ignition[699]: Ignition finished successfully
Dec 16 12:32:44.110084 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:32:44.110979 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:32:44.140691 ignition[811]: Ignition 2.22.0
Dec 16 12:32:44.140710 ignition[811]: Stage: kargs
Dec 16 12:32:44.140845 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:32:44.140854 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:32:44.141609 ignition[811]: kargs: kargs passed
Dec 16 12:32:44.141658 ignition[811]: Ignition finished successfully
Dec 16 12:32:44.144401 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:32:44.148411 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:32:44.184480 ignition[819]: Ignition 2.22.0
Dec 16 12:32:44.184496 ignition[819]: Stage: disks
Dec 16 12:32:44.184640 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:32:44.187583 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:32:44.184649 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:32:44.188722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:32:44.185457 ignition[819]: disks: disks passed
Dec 16 12:32:44.190138 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:32:44.185505 ignition[819]: Ignition finished successfully
Dec 16 12:32:44.191951 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:32:44.193663 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:32:44.194932 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:32:44.197783 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:32:44.223283 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:32:44.230193 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:32:44.232693 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:32:44.304142 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:32:44.304194 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:32:44.305394 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:32:44.307835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:32:44.309605 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:32:44.310789 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:32:44.310856 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:32:44.310885 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:32:44.328440 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:32:44.331218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:32:44.338030 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Dec 16 12:32:44.338077 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:32:44.338093 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:32:44.342133 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:32:44.342182 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:32:44.343793 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:32:44.370776 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:32:44.374027 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:32:44.378243 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:32:44.381513 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:32:44.455268 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:32:44.457226 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:32:44.458764 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:32:44.482137 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:32:44.498284 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:32:44.514694 ignition[951]: INFO : Ignition 2.22.0
Dec 16 12:32:44.514694 ignition[951]: INFO : Stage: mount
Dec 16 12:32:44.516613 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:32:44.516613 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:32:44.516613 ignition[951]: INFO : mount: mount passed
Dec 16 12:32:44.516613 ignition[951]: INFO : Ignition finished successfully
Dec 16 12:32:44.517279 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:32:44.521230 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:32:44.882171 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:32:44.883699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:32:44.905136 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Dec 16 12:32:44.907720 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:32:44.907805 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:32:44.912275 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:32:44.912353 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:32:44.914346 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:32:44.952870 ignition[981]: INFO : Ignition 2.22.0
Dec 16 12:32:44.952870 ignition[981]: INFO : Stage: files
Dec 16 12:32:44.954438 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:32:44.954438 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:32:44.954438 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:32:44.959139 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:32:44.959139 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:32:44.959139 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:32:44.959139 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:32:44.959139 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:32:44.958034 unknown[981]: wrote ssh authorized keys file for user: core
Dec 16 12:32:44.966648 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:32:44.966648 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 16 12:32:45.015135 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:32:45.141335 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:32:45.141335 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:32:45.144890 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:32:45.156601 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Dec 16 12:32:45.517477 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:32:45.724437 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:32:45.724437 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 12:32:45.728958 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:32:45.745982 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:32:45.750962 ignition[981]: INFO : files: files passed
Dec 16 12:32:45.750962 ignition[981]: INFO : Ignition finished successfully
Dec 16 12:32:45.751699 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:32:45.754499 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:32:45.756961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:32:45.778567 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:32:45.780698 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:32:45.781890 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:32:45.781890 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:32:45.785555 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:32:45.782534 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:32:45.785689 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:32:45.789703 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:32:45.792238 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:32:45.843267 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:32:45.844198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:32:45.845528 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:32:45.847122 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:32:45.848908 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:32:45.850054 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:32:45.880998 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:32:45.883463 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:32:45.909544 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:32:45.910716 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:32:45.913009 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:32:45.915057 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:32:45.915209 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:32:45.917639 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:32:45.919434 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:32:45.920904 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:32:45.922507 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:32:45.924292 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:32:45.926129 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:32:45.927968 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:32:45.930160 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:32:45.932925 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:32:45.935428 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:32:45.937210 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:32:45.938957 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:32:45.939101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:32:45.941577 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:32:45.942561 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:32:45.944269 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:32:45.945177 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:32:45.947277 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:32:45.947421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:32:45.950208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:32:45.950411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:32:45.952208 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:32:45.953637 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:32:45.953784 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:32:45.955578 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:32:45.957177 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:32:45.958509 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:32:45.958640 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:32:45.960167 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:32:45.960291 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:32:45.962203 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:32:45.962402 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:32:45.963972 systemd[1]: ignition-files.service: Deactivated successfully. 
Dec 16 12:32:45.964137 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:32:45.966420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:32:45.968604 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:32:45.970000 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:32:45.970216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:32:45.972057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:32:45.972230 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:32:45.979469 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:32:45.984163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:32:45.991896 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:32:45.994217 systemd-networkd[803]: eth0: Gained IPv6LL Dec 16 12:32:45.996484 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:32:45.996588 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:32:46.004622 ignition[1036]: INFO : Ignition 2.22.0 Dec 16 12:32:46.004622 ignition[1036]: INFO : Stage: umount Dec 16 12:32:46.006265 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:32:46.006265 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:32:46.006265 ignition[1036]: INFO : umount: umount passed Dec 16 12:32:46.006265 ignition[1036]: INFO : Ignition finished successfully Dec 16 12:32:46.008941 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:32:46.009055 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:32:46.010246 systemd[1]: Stopped target network.target - Network. Dec 16 12:32:46.011419 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:32:46.011486 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:32:46.013168 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:32:46.013248 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:32:46.014863 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:32:46.014910 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:32:46.016725 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:32:46.016819 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:32:46.018131 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:32:46.018184 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:32:46.019921 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:32:46.021353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:32:46.033186 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:32:46.034746 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:32:46.039013 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:32:46.040427 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:32:46.040533 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:32:46.049021 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Dec 16 12:32:46.049702 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:32:46.052252 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:32:46.052291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:32:46.058554 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:32:46.060341 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:32:46.060412 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:32:46.062261 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:32:46.062327 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:32:46.065025 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:32:46.065072 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:32:46.066831 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:32:46.066876 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:32:46.069367 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:32:46.073821 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:32:46.073888 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:32:46.089992 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:32:46.091206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:32:46.092518 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:32:46.092611 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:32:46.094552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:32:46.094622 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:32:46.095843 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:32:46.095877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:32:46.097509 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:32:46.097570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:32:46.099813 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:32:46.099866 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:32:46.102173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:32:46.102229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:32:46.105692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:32:46.107275 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:32:46.107349 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:32:46.110215 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:32:46.110266 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:32:46.113304 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Dec 16 12:32:46.113368 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:32:46.116190 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:32:46.116236 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:32:46.117305 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:32:46.117361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:46.120908 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 12:32:46.120960 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 12:32:46.120988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 12:32:46.121020 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:32:46.121370 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:32:46.121481 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:32:46.123539 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:32:46.125664 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:32:46.148352 systemd[1]: Switching root. Dec 16 12:32:46.193526 systemd-journald[244]: Journal stopped Dec 16 12:32:46.949513 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Dec 16 12:32:46.949565 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:32:46.949577 kernel: SELinux: policy capability open_perms=1 Dec 16 12:32:46.949590 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:32:46.949607 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:32:46.949616 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:32:46.949625 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:32:46.949638 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:32:46.949647 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:32:46.949664 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:32:46.949674 kernel: audit: type=1403 audit(1765888366.376:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:32:46.949684 systemd[1]: Successfully loaded SELinux policy in 61.603ms. Dec 16 12:32:46.949702 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.412ms. Dec 16 12:32:46.949714 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:32:46.949725 systemd[1]: Detected virtualization kvm. Dec 16 12:32:46.949736 systemd[1]: Detected architecture arm64. Dec 16 12:32:46.949745 systemd[1]: Detected first boot. Dec 16 12:32:46.949758 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:32:46.949769 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:32:46.949779 zram_generator::config[1083]: No configuration found. Dec 16 12:32:46.949790 systemd[1]: Populated /etc with preset unit settings. 
Dec 16 12:32:46.949803 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:32:46.949813 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:32:46.949822 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:32:46.949833 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:32:46.949843 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:32:46.949855 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:32:46.949865 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:32:46.949875 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:32:46.949885 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:32:46.949895 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:32:46.949905 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:32:46.949916 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:32:46.949926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:32:46.949937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:32:46.949949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:32:46.949959 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:32:46.949970 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:32:46.949981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:32:46.949991 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:32:46.950002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:32:46.950012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:32:46.950024 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:32:46.950034 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:32:46.950045 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:32:46.950055 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:32:46.950065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:32:46.950076 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:32:46.950086 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:32:46.950097 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:32:46.950114 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:32:46.950126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:32:46.950135 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:32:46.950146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:32:46.950157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 16 12:32:46.950168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:32:46.950178 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:32:46.950188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:32:46.950198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:32:46.950208 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:32:46.950221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:32:46.950231 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:32:46.950242 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:32:46.950253 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:32:46.950264 systemd[1]: Reached target machines.target - Containers. Dec 16 12:32:46.950274 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:32:46.950285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:46.950296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:32:46.950321 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:32:46.950340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:46.950351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:32:46.950363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:46.950373 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:32:46.950383 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:46.950394 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:32:46.950405 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:32:46.950415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:32:46.950427 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:32:46.950438 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:32:46.950449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:46.950459 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:32:46.950469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:32:46.950480 kernel: loop: module loaded Dec 16 12:32:46.950490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:32:46.950501 kernel: ACPI: bus type drm_connector registered Dec 16 12:32:46.950510 kernel: fuse: init (API version 7.41) Dec 16 12:32:46.950522 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:32:46.950534 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 12:32:46.950544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:32:46.950555 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:32:46.950565 systemd[1]: Stopped verity-setup.service. Dec 16 12:32:46.950612 systemd-journald[1154]: Collecting audit messages is disabled. Dec 16 12:32:46.950634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:32:46.950645 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:32:46.950656 systemd-journald[1154]: Journal started Dec 16 12:32:46.950680 systemd-journald[1154]: Runtime Journal (/run/log/journal/1361a39622004d989cfeaaa0ce512577) is 6M, max 48.5M, 42.4M free. Dec 16 12:32:46.732262 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:32:46.751179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:32:46.751591 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:32:46.953731 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:32:46.954407 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:32:46.955443 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:32:46.956465 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:32:46.957492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:32:46.959199 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:32:46.960549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:32:46.961968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:32:46.962158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:32:46.963371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:46.963526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:46.964773 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:32:46.966160 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:32:46.967316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:46.967477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:46.968898 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:32:46.969073 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:32:46.970316 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:46.970479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:46.971788 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:32:46.973061 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:32:46.974601 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:32:46.976485 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:32:46.988271 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:32:46.990453 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:32:46.992331 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 16 12:32:46.993297 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:32:46.993340 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:32:46.995059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:32:47.001320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:32:47.002370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:47.003557 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:32:47.006176 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:32:47.007294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:32:47.010360 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:32:47.011839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:32:47.015533 systemd-journald[1154]: Time spent on flushing to /var/log/journal/1361a39622004d989cfeaaa0ce512577 is 15.887ms for 887 entries. Dec 16 12:32:47.015533 systemd-journald[1154]: System Journal (/var/log/journal/1361a39622004d989cfeaaa0ce512577) is 8M, max 195.6M, 187.6M free. Dec 16 12:32:47.050673 systemd-journald[1154]: Received client request to flush runtime journal. Dec 16 12:32:47.050730 kernel: loop0: detected capacity change from 0 to 100632 Dec 16 12:32:47.014263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:32:47.016532 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:32:47.019375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:32:47.021995 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:32:47.025463 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:32:47.026681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:32:47.053021 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:32:47.063153 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:32:47.057224 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Dec 16 12:32:47.057237 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Dec 16 12:32:47.057668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:32:47.059946 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:32:47.063040 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:32:47.065697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:32:47.071397 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:32:47.075574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
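
A quick check of the journald flush figures reported above: 15.887 ms spent flushing 887 entries works out to roughly 18 µs per entry on average.

    # Arithmetic on the journald flush line above.
    flush_ms, entries = 15.887, 887
    print(f"{flush_ms / entries * 1000:.1f} µs/entry")  # ~17.9
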
Dec 16 12:32:47.084152 kernel: loop1: detected capacity change from 0 to 119840 Dec 16 12:32:47.097163 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:32:47.106131 kernel: loop2: detected capacity change from 0 to 207008 Dec 16 12:32:47.122540 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:32:47.127378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:32:47.136366 kernel: loop3: detected capacity change from 0 to 100632 Dec 16 12:32:47.144144 kernel: loop4: detected capacity change from 0 to 119840 Dec 16 12:32:47.148477 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Dec 16 12:32:47.148494 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Dec 16 12:32:47.150138 kernel: loop5: detected capacity change from 0 to 207008 Dec 16 12:32:47.152595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:32:47.158688 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:32:47.159080 (sd-merge)[1222]: Merged extensions into '/usr'. Dec 16 12:32:47.165808 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:32:47.165830 systemd[1]: Reloading... Dec 16 12:32:47.226139 zram_generator::config[1250]: No configuration found. Dec 16 12:32:47.295174 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:32:47.372361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:32:47.372446 systemd[1]: Reloading finished in 206 ms. Dec 16 12:32:47.389861 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:32:47.392176 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:32:47.404314 systemd[1]: Starting ensure-sysext.service... Dec 16 12:32:47.406067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:32:47.429494 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:32:47.429513 systemd[1]: Reloading... Dec 16 12:32:47.432865 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:32:47.432896 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:32:47.433176 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:32:47.433388 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:32:47.434014 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:32:47.434251 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 16 12:32:47.434298 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 16 12:32:47.437271 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:32:47.437280 systemd-tmpfiles[1286]: Skipping /boot Dec 16 12:32:47.443912 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. 
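
The (sd-merge) lines above pick up three extensions, one of which is the kubernetes.raw symlink Ignition wrote earlier into /etc/extensions. The sketch below illustrates only the discovery idea (image stems under /etc/extensions name the extensions); it is a simplification, not systemd-sysext's actual implementation.

    from pathlib import Path

    # Simplified illustration of sysext image discovery: extension images,
    # or symlinks to them like kubernetes.raw above, live under
    # /etc/extensions, and each image's stem names the extension.
    for image in sorted(Path("/etc/extensions").glob("*.raw")):
        print(image.stem, "->", image.resolve())
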
Dec 16 12:32:47.443927 systemd-tmpfiles[1286]: Skipping /boot Dec 16 12:32:47.479139 zram_generator::config[1313]: No configuration found. Dec 16 12:32:47.612732 systemd[1]: Reloading finished in 182 ms. Dec 16 12:32:47.636026 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:32:47.641941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:32:47.652232 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:32:47.654809 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:32:47.657278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:32:47.660294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:32:47.664334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:32:47.666843 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:32:47.672449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:47.680693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:47.683096 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:47.687367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:47.688687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:47.688829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:47.702331 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:32:47.704705 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:32:47.706433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:47.706682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:47.706804 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Dec 16 12:32:47.708433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:47.708624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:47.710543 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:47.710720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:47.714716 augenrules[1378]: No rules Dec 16 12:32:47.715995 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:32:47.716499 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:32:47.722262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:47.723940 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:47.726767 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:47.733765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:32:47.735139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:47.735349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:47.738264 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:32:47.742465 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:32:47.744840 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:32:47.746609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:47.746796 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:32:47.748425 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:47.749229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:47.750780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:47.750936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:47.759481 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:32:47.775078 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:32:47.777416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:32:47.780267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:32:47.783370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:32:47.786439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:32:47.792413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:32:47.793530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:32:47.793582 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:32:47.795836 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:32:47.797452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:32:47.797960 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:32:47.800407 systemd[1]: Finished ensure-sysext.service. Dec 16 12:32:47.806552 augenrules[1427]: /sbin/augenrules: No change Dec 16 12:32:47.809852 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:32:47.813876 augenrules[1452]: No rules Dec 16 12:32:47.818483 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:32:47.818839 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:32:47.821683 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:32:47.821860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 16 12:32:47.823998 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:32:47.824552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:32:47.826103 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:32:47.826506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:32:47.828055 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:32:47.828219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:32:47.841800 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:32:47.845721 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:32:47.845793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:32:47.856311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:32:47.882719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:32:47.886239 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:32:47.913836 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:32:47.962791 systemd-resolved[1352]: Positive Trust Anchors: Dec 16 12:32:47.962809 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:32:47.962846 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:32:47.965330 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:32:47.966178 systemd-networkd[1440]: lo: Link UP Dec 16 12:32:47.966182 systemd-networkd[1440]: lo: Gained carrier Dec 16 12:32:47.967042 systemd-networkd[1440]: Enumeration completed Dec 16 12:32:47.967217 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:32:47.967623 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:47.967638 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:32:47.968322 systemd-networkd[1440]: eth0: Link UP Dec 16 12:32:47.968543 systemd-networkd[1440]: eth0: Gained carrier Dec 16 12:32:47.968561 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:32:47.968831 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:32:47.971936 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:32:47.972560 systemd-resolved[1352]: Defaulting to hostname 'linux'. 
Dec 16 12:32:47.976345 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:32:47.977431 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:32:47.979368 systemd[1]: Reached target network.target - Network. Dec 16 12:32:47.980086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:32:47.981180 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:32:47.982093 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:32:47.983080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:32:47.985567 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:32:47.986872 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:32:47.988516 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:32:47.990263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:32:47.990312 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:32:47.991136 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:32:47.992619 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:32:47.997632 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:32:48.000237 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:32:48.001431 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:32:48.003270 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:32:48.006814 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:32:48.007221 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:32:48.008051 systemd-timesyncd[1470]: Network configuration changed, trying to establish connection. Dec 16 12:32:48.008512 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:32:47.560645 systemd-journald[1154]: Time jumped backwards, rotating. Dec 16 12:32:47.554763 systemd-resolved[1352]: Clock change detected. Flushing caches. Dec 16 12:32:47.554801 systemd-timesyncd[1470]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:32:47.554857 systemd-timesyncd[1470]: Initial clock synchronization to Tue 2025-12-16 12:32:47.554719 UTC. Dec 16 12:32:47.557721 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:32:47.559793 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:32:47.561300 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:32:47.562677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:32:47.562715 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:32:47.566363 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:32:47.569109 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:32:47.572367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
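
The DHCPv4 lease line above carries the interface, address, prefix, gateway, and server in a fixed format. A minimal parsing sketch follows; the line text is copied from the log, while the regex itself is an illustrative assumption.

    import re

    # Minimal sketch: extract lease details from the networkd DHCPv4 line.
    line = ("systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.82/16, "
            "gateway 10.0.0.1 acquired from 10.0.0.1")
    m = re.search(
        r"(?P<iface>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)",
        line,
    )
    if m:
        print(m.groupdict())
        # {'iface': 'eth0', 'addr': '10.0.0.82', 'prefix': '16',
        #  'gw': '10.0.0.1', 'server': '10.0.0.1'}
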
Dec 16 12:32:47.574421 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:32:47.576649 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:32:47.577603 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:32:47.579034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:32:47.582337 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:32:47.583081 jq[1500]: false Dec 16 12:32:47.585186 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:32:47.587413 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:32:47.590921 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:32:47.593832 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:32:47.594383 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:32:47.594989 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:32:47.597233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:32:47.601172 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:32:47.606443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:32:47.608066 jq[1510]: true Dec 16 12:32:47.608580 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:32:47.608764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:32:47.610422 extend-filesystems[1501]: Found /dev/vda6 Dec 16 12:32:47.612580 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:32:47.612757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:32:47.615015 extend-filesystems[1501]: Found /dev/vda9 Dec 16 12:32:47.619404 extend-filesystems[1501]: Checking size of /dev/vda9 Dec 16 12:32:47.628278 jq[1518]: true Dec 16 12:32:47.631382 update_engine[1508]: I20251216 12:32:47.631064 1508 main.cc:92] Flatcar Update Engine starting Dec 16 12:32:47.646621 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:32:47.649435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:32:47.655084 dbus-daemon[1498]: [system] SELinux support is enabled Dec 16 12:32:47.655461 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:32:47.659395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:32:47.659430 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Dec 16 12:32:47.660742 tar[1517]: linux-arm64/LICENSE Dec 16 12:32:47.661604 tar[1517]: linux-arm64/helm Dec 16 12:32:47.661504 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:32:47.661522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:32:47.664922 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:32:47.665377 update_engine[1508]: I20251216 12:32:47.665102 1508 update_check_scheduler.cc:74] Next update check in 6m40s Dec 16 12:32:47.666960 extend-filesystems[1501]: Resized partition /dev/vda9 Dec 16 12:32:47.668029 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:32:47.677036 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:32:47.677255 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:32:47.678553 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:32:47.734460 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:32:47.753944 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:32:47.843329 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:32:47.843850 systemd-logind[1507]: New seat seat0. Dec 16 12:32:47.844695 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:32:47.951168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:32:47.960169 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:32:47.999164 containerd[1534]: time="2025-12-16T12:32:47Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:32:48.172556 containerd[1534]: time="2025-12-16T12:32:48.172380991Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:32:48.172625 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:32:48.172625 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:32:48.172625 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:32:48.176091 extend-filesystems[1501]: Resized filesystem in /dev/vda9 Dec 16 12:32:48.174190 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:32:48.176178 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
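
The resize sequence above grows /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size resize2fs reports, that is a jump from about 2.1 GiB to about 7.1 GiB.

    # Sanity check on the resize figures above (4 KiB blocks, per the
    # "(4k)" suffix in the resize2fs output).
    BLOCK = 4096
    for blocks in (553_472, 1_864_699):
        print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
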
Dec 16 12:32:48.191051 containerd[1534]: time="2025-12-16T12:32:48.190988111Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.48µs" Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191163751Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191198311Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191368191Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191386951Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191412831Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191462431Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191473311Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191693551Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191707031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191717151Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191724631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192249 containerd[1534]: time="2025-12-16T12:32:48.191789471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192475 containerd[1534]: time="2025-12-16T12:32:48.191984871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192475 containerd[1534]: time="2025-12-16T12:32:48.192011071Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:32:48.192475 containerd[1534]: time="2025-12-16T12:32:48.192020951Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:32:48.192475 containerd[1534]: time="2025-12-16T12:32:48.192055991Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:32:48.192475 containerd[1534]: 
time="2025-12-16T12:32:48.192283631Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:32:48.192475 containerd[1534]: time="2025-12-16T12:32:48.192389151Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:32:48.213028 tar[1517]: linux-arm64/README.md Dec 16 12:32:48.232949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:32:48.244393 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:32:48.245208 containerd[1534]: time="2025-12-16T12:32:48.245173071Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:32:48.245362 containerd[1534]: time="2025-12-16T12:32:48.245338831Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:32:48.245454 containerd[1534]: time="2025-12-16T12:32:48.245439151Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:32:48.245570 containerd[1534]: time="2025-12-16T12:32:48.245496631Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:32:48.245570 containerd[1534]: time="2025-12-16T12:32:48.245512351Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:32:48.245570 containerd[1534]: time="2025-12-16T12:32:48.245523471Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:32:48.245570 containerd[1534]: time="2025-12-16T12:32:48.245535431Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:32:48.245570 containerd[1534]: time="2025-12-16T12:32:48.245547631Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:32:48.245684 containerd[1534]: time="2025-12-16T12:32:48.245671391Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:32:48.245816 containerd[1534]: time="2025-12-16T12:32:48.245719511Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:32:48.245816 containerd[1534]: time="2025-12-16T12:32:48.245749431Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:32:48.245816 containerd[1534]: time="2025-12-16T12:32:48.245768151Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:32:48.246008 containerd[1534]: time="2025-12-16T12:32:48.245992511Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:32:48.246014 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 16 12:32:48.246220 containerd[1534]: time="2025-12-16T12:32:48.246125391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:32:48.246220 containerd[1534]: time="2025-12-16T12:32:48.246169911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:32:48.246220 containerd[1534]: time="2025-12-16T12:32:48.246181831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:32:48.246220 containerd[1534]: time="2025-12-16T12:32:48.246191991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:32:48.246220 containerd[1534]: time="2025-12-16T12:32:48.246201831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:32:48.246395 containerd[1534]: time="2025-12-16T12:32:48.246332591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:32:48.246395 containerd[1534]: time="2025-12-16T12:32:48.246353031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:32:48.246395 containerd[1534]: time="2025-12-16T12:32:48.246367551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:32:48.246395 containerd[1534]: time="2025-12-16T12:32:48.246377911Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:32:48.246592 containerd[1534]: time="2025-12-16T12:32:48.246481071Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:32:48.246797 containerd[1534]: time="2025-12-16T12:32:48.246729351Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:32:48.246797 containerd[1534]: time="2025-12-16T12:32:48.246751431Z" level=info msg="Start snapshots syncer" Dec 16 12:32:48.246797 containerd[1534]: time="2025-12-16T12:32:48.246775631Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:32:48.247214 containerd[1534]: time="2025-12-16T12:32:48.247147271Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:32:48.247425 containerd[1534]: time="2025-12-16T12:32:48.247347951Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:32:48.247567 containerd[1534]: time="2025-12-16T12:32:48.247489671Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:32:48.247720 containerd[1534]: time="2025-12-16T12:32:48.247704071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:32:48.247795 containerd[1534]: time="2025-12-16T12:32:48.247784111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:32:48.247921 containerd[1534]: time="2025-12-16T12:32:48.247856311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:32:48.247921 containerd[1534]: time="2025-12-16T12:32:48.247874151Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:32:48.247921 containerd[1534]: time="2025-12-16T12:32:48.247887751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:32:48.247921 containerd[1534]: time="2025-12-16T12:32:48.247898631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:32:48.247921 containerd[1534]: time="2025-12-16T12:32:48.247909911Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:32:48.248102 containerd[1534]: time="2025-12-16T12:32:48.248049111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:32:48.248102 containerd[1534]: 
time="2025-12-16T12:32:48.248075311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:32:48.248179 containerd[1534]: time="2025-12-16T12:32:48.248088831Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:32:48.248336 containerd[1534]: time="2025-12-16T12:32:48.248266631Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:32:48.248336 containerd[1534]: time="2025-12-16T12:32:48.248288111Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:32:48.248336 containerd[1534]: time="2025-12-16T12:32:48.248297831Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:32:48.248336 containerd[1534]: time="2025-12-16T12:32:48.248306831Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:32:48.248401 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:32:48.248607 containerd[1534]: time="2025-12-16T12:32:48.248314911Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:32:48.248607 containerd[1534]: time="2025-12-16T12:32:48.248498951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:32:48.248607 containerd[1534]: time="2025-12-16T12:32:48.248512551Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:32:48.248760 containerd[1534]: time="2025-12-16T12:32:48.248687311Z" level=info msg="runtime interface created" Dec 16 12:32:48.248760 containerd[1534]: time="2025-12-16T12:32:48.248698871Z" level=info msg="created NRI interface" Dec 16 12:32:48.248760 containerd[1534]: time="2025-12-16T12:32:48.248715951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:32:48.248760 containerd[1534]: time="2025-12-16T12:32:48.248731271Z" level=info msg="Connect containerd service" Dec 16 12:32:48.248914 containerd[1534]: time="2025-12-16T12:32:48.248848591Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:32:48.249821 containerd[1534]: time="2025-12-16T12:32:48.249748671Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320535951Z" level=info msg="Start subscribing containerd event" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320608511Z" level=info msg="Start recovering state" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320697511Z" level=info msg="Start event monitor" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320710831Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320720791Z" level=info msg="Start streaming server" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320729591Z" level=info 
msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320736151Z" level=info msg="runtime interface starting up..." Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320741431Z" level=info msg="starting plugins..." Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320797751Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320812751Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.320858271Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:32:48.321156 containerd[1534]: time="2025-12-16T12:32:48.321032871Z" level=info msg="containerd successfully booted in 0.322251s" Dec 16 12:32:48.321168 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:32:48.687838 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:32:48.708824 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:32:48.711526 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:32:48.745053 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:32:48.745311 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:32:48.748820 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:32:48.784742 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:32:48.788018 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:32:48.790381 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:32:48.792112 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:32:49.314321 systemd-networkd[1440]: eth0: Gained IPv6LL Dec 16 12:32:49.316679 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:32:49.318458 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:32:49.320950 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:32:49.323585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:32:49.335647 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:32:49.363088 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:32:49.365675 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:32:49.365899 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:32:49.368581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:32:49.908198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:32:49.909658 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:32:49.913753 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:32:49.916256 systemd[1]: Startup finished in 2.056s (kernel) + 4.760s (initrd) + 4.056s (userspace) = 10.873s. 
Dec 16 12:32:50.292253 kubelet[1639]: E1216 12:32:50.292114 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:32:50.300422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:32:50.300582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:32:50.300993 systemd[1]: kubelet.service: Consumed 774ms CPU time, 257.5M memory peak. Dec 16 12:32:54.348354 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:32:54.349752 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:35728.service - OpenSSH per-connection server daemon (10.0.0.1:35728). Dec 16 12:32:54.456725 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 35728 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:54.458733 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:54.465622 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:32:54.466599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:32:54.472480 systemd-logind[1507]: New session 1 of user core. Dec 16 12:32:54.495570 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:32:54.498106 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:32:54.522448 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:32:54.524776 systemd-logind[1507]: New session c1 of user core. Dec 16 12:32:54.650957 systemd[1657]: Queued start job for default target default.target. Dec 16 12:32:54.671480 systemd[1657]: Created slice app.slice - User Application Slice. Dec 16 12:32:54.671514 systemd[1657]: Reached target paths.target - Paths. Dec 16 12:32:54.671554 systemd[1657]: Reached target timers.target - Timers. Dec 16 12:32:54.673326 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:32:54.684711 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:32:54.684834 systemd[1657]: Reached target sockets.target - Sockets. Dec 16 12:32:54.684875 systemd[1657]: Reached target basic.target - Basic System. Dec 16 12:32:54.684902 systemd[1657]: Reached target default.target - Main User Target. Dec 16 12:32:54.684928 systemd[1657]: Startup finished in 152ms. Dec 16 12:32:54.685099 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:32:54.686419 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:32:54.745392 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742). Dec 16 12:32:54.807351 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:54.808792 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:54.813216 systemd-logind[1507]: New session 2 of user core. Dec 16 12:32:54.826347 systemd[1]: Started session-2.scope - Session 2 of User core. 
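The kubelet exit above (status=1/FAILURE) is the stock failure mode of a node that has not been initialized yet: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so until then every start attempt dies on the open() error and systemd schedules a restart. For illustration only, a hand-written minimal KubeletConfiguration of the kind kubeadm would generate; all values here are placeholders, not this host's eventual config:

    # Sketch only: on a kubeadm-managed host this file comes from `kubeadm init`/`kubeadm join`.
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the CRI-reported driver later in this log
    staticPodPath: /etc/kubernetes/manifests   # matches the static pod path the kubelet logs later
    EOF
    sudo systemctl restart kubelet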
Dec 16 12:32:54.878259 sshd[1671]: Connection closed by 10.0.0.1 port 35742 Dec 16 12:32:54.878831 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:54.889448 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:35742.service: Deactivated successfully. Dec 16 12:32:54.892646 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:32:54.894037 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:32:54.896301 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Dec 16 12:32:54.897281 systemd-logind[1507]: Removed session 2. Dec 16 12:32:54.967941 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:54.970153 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:54.976917 systemd-logind[1507]: New session 3 of user core. Dec 16 12:32:54.990381 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:32:55.040028 sshd[1681]: Connection closed by 10.0.0.1 port 35750 Dec 16 12:32:55.041813 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:55.066769 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:35750.service: Deactivated successfully. Dec 16 12:32:55.069714 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:32:55.070938 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:32:55.073286 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:35764.service - OpenSSH per-connection server daemon (10.0.0.1:35764). Dec 16 12:32:55.074268 systemd-logind[1507]: Removed session 3. Dec 16 12:32:55.149651 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 35764 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:55.151528 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:55.156239 systemd-logind[1507]: New session 4 of user core. Dec 16 12:32:55.167329 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:32:55.220012 sshd[1690]: Connection closed by 10.0.0.1 port 35764 Dec 16 12:32:55.220470 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:55.238369 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:35764.service: Deactivated successfully. Dec 16 12:32:55.241642 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:32:55.244732 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:32:55.247070 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:35778.service - OpenSSH per-connection server daemon (10.0.0.1:35778). Dec 16 12:32:55.247810 systemd-logind[1507]: Removed session 4. Dec 16 12:32:55.319690 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 35778 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:55.321228 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:55.326929 systemd-logind[1507]: New session 5 of user core. Dec 16 12:32:55.333396 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:32:55.390182 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:32:55.390459 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:32:55.410173 sudo[1700]: pam_unix(sudo:session): session closed for user root Dec 16 12:32:55.411786 sshd[1699]: Connection closed by 10.0.0.1 port 35778 Dec 16 12:32:55.412347 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:55.427498 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:35778.service: Deactivated successfully. Dec 16 12:32:55.430934 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:32:55.432498 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:32:55.435332 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:35784.service - OpenSSH per-connection server daemon (10.0.0.1:35784). Dec 16 12:32:55.436791 systemd-logind[1507]: Removed session 5. Dec 16 12:32:55.518189 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 35784 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:55.519553 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:55.523927 systemd-logind[1507]: New session 6 of user core. Dec 16 12:32:55.540378 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:32:55.592828 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:32:55.593168 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:32:55.676644 sudo[1711]: pam_unix(sudo:session): session closed for user root Dec 16 12:32:55.681930 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:32:55.682203 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:32:55.693464 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:32:55.743730 augenrules[1733]: No rules Dec 16 12:32:55.745183 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:32:55.745539 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:32:55.747823 sudo[1710]: pam_unix(sudo:session): session closed for user root Dec 16 12:32:55.749610 sshd[1709]: Connection closed by 10.0.0.1 port 35784 Dec 16 12:32:55.749487 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:55.761480 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:35784.service: Deactivated successfully. Dec 16 12:32:55.763027 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:32:55.763765 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:32:55.766152 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:35796.service - OpenSSH per-connection server daemon (10.0.0.1:35796). Dec 16 12:32:55.767084 systemd-logind[1507]: Removed session 6. Dec 16 12:32:55.829618 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 35796 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:32:55.830973 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:55.834943 systemd-logind[1507]: New session 7 of user core. Dec 16 12:32:55.843335 systemd[1]: Started session-7.scope - Session 7 of User core. 
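Sessions 5 and 6 above are a short provisioning pass: switch SELinux to enforcing, drop the default audit rule files, and reload the now-empty rule set, which is why augenrules reports "No rules". Collected into one script, the commands are reconstructed verbatim from the sudo entries:

    sudo setenforce 1
    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules   # reloads audit rules; augenrules then finds none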
Dec 16 12:32:55.895494 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:32:55.896278 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:32:56.199923 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:32:56.218542 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:32:56.438106 dockerd[1767]: time="2025-12-16T12:32:56.438019231Z" level=info msg="Starting up" Dec 16 12:32:56.440917 dockerd[1767]: time="2025-12-16T12:32:56.440865231Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:32:56.452676 dockerd[1767]: time="2025-12-16T12:32:56.452593551Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:32:56.483968 dockerd[1767]: time="2025-12-16T12:32:56.483899471Z" level=info msg="Loading containers: start." Dec 16 12:32:56.493402 kernel: Initializing XFRM netlink socket Dec 16 12:32:56.705246 systemd-networkd[1440]: docker0: Link UP Dec 16 12:32:56.709896 dockerd[1767]: time="2025-12-16T12:32:56.709858151Z" level=info msg="Loading containers: done." Dec 16 12:32:56.721341 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3637690138-merged.mount: Deactivated successfully. Dec 16 12:32:56.724840 dockerd[1767]: time="2025-12-16T12:32:56.724522711Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:32:56.724840 dockerd[1767]: time="2025-12-16T12:32:56.724603751Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:32:56.724840 dockerd[1767]: time="2025-12-16T12:32:56.724681391Z" level=info msg="Initializing buildkit" Dec 16 12:32:56.747255 dockerd[1767]: time="2025-12-16T12:32:56.747214911Z" level=info msg="Completed buildkit initialization" Dec 16 12:32:56.752216 dockerd[1767]: time="2025-12-16T12:32:56.752182791Z" level=info msg="Daemon has completed initialization" Dec 16 12:32:56.752531 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:32:56.752634 dockerd[1767]: time="2025-12-16T12:32:56.752460871Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:32:57.266608 containerd[1534]: time="2025-12-16T12:32:57.266541871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:32:57.795006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377967456.mount: Deactivated successfully. 
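The PullImage request above goes through containerd's CRI plugin, not through Docker. Assuming crictl is available on the host, an equivalent manual pull against the socket containerd reported serving on at boot would look like this (a sketch, not a step taken in this log):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.32.10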
Dec 16 12:32:59.008788 containerd[1534]: time="2025-12-16T12:32:59.008724311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:32:59.009344 containerd[1534]: time="2025-12-16T12:32:59.009301191Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961"
Dec 16 12:32:59.010482 containerd[1534]: time="2025-12-16T12:32:59.010447071Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:32:59.013268 containerd[1534]: time="2025-12-16T12:32:59.013241911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:32:59.014201 containerd[1534]: time="2025-12-16T12:32:59.014178031Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.74758768s"
Dec 16 12:32:59.014245 containerd[1534]: time="2025-12-16T12:32:59.014208631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\""
Dec 16 12:32:59.014798 containerd[1534]: time="2025-12-16T12:32:59.014775911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 16 12:33:00.108733 containerd[1534]: time="2025-12-16T12:33:00.108667871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:00.109615 containerd[1534]: time="2025-12-16T12:33:00.109577311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957"
Dec 16 12:33:00.110045 containerd[1534]: time="2025-12-16T12:33:00.109998911Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:00.113399 containerd[1534]: time="2025-12-16T12:33:00.113359271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:00.115051 containerd[1534]: time="2025-12-16T12:33:00.115021831Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.10022064s"
Dec 16 12:33:00.115101 containerd[1534]: time="2025-12-16T12:33:00.115057911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\""
Dec 16 12:33:00.115533 containerd[1534]: time="2025-12-16T12:33:00.115513831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 16 12:33:00.546603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:33:00.547979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:33:00.714479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:33:00.722567 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:33:00.768988 kubelet[2055]: E1216 12:33:00.768944 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:33:00.772826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:33:00.772960 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:33:00.773320 systemd[1]: kubelet.service: Consumed 153ms CPU time, 107.9M memory peak.
Dec 16 12:33:01.356194 containerd[1534]: time="2025-12-16T12:33:01.355190111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:01.356194 containerd[1534]: time="2025-12-16T12:33:01.355940551Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438"
Dec 16 12:33:01.357250 containerd[1534]: time="2025-12-16T12:33:01.357205391Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:01.360504 containerd[1534]: time="2025-12-16T12:33:01.360464031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:01.361805 containerd[1534]: time="2025-12-16T12:33:01.361409191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.24586468s"
Dec 16 12:33:01.361805 containerd[1534]: time="2025-12-16T12:33:01.361444711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\""
Dec 16 12:33:01.361907 containerd[1534]: time="2025-12-16T12:33:01.361827751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 16 12:33:02.309280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357927335.mount: Deactivated successfully.
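At this point kubelet.service has failed twice for the same missing-config reason and systemd's restart logic is driving the loop ("restart counter is at 1" above). A few standard systemd commands suffice to inspect such a loop; these are assumed tools on the host, not steps taken in this log:

    systemctl status kubelet --no-pager      # last exit code and restart state
    systemctl show kubelet -p NRestarts      # the restart counter systemd reports above
    journalctl -u kubelet -n 20 --no-pager   # the config.yaml "no such file" error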
Dec 16 12:33:02.580229 containerd[1534]: time="2025-12-16T12:33:02.580109071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:02.581451 containerd[1534]: time="2025-12-16T12:33:02.581413191Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 16 12:33:02.582678 containerd[1534]: time="2025-12-16T12:33:02.582472871Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:02.584520 containerd[1534]: time="2025-12-16T12:33:02.584487191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:02.584934 containerd[1534]: time="2025-12-16T12:33:02.584910111Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.22305688s" Dec 16 12:33:02.584992 containerd[1534]: time="2025-12-16T12:33:02.584937471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:33:02.585767 containerd[1534]: time="2025-12-16T12:33:02.585574311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:33:03.137572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591248473.mount: Deactivated successfully. 
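Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount3357927335.mount above follow systemd's path-escaping rules: the path separator "/" becomes "-", and a literal "-" inside a path component is escaped as \x2d. The standard systemd-escape tool reproduces the mapping:

    systemd-escape --path --suffix=mount /var/lib/containerd/tmpmounts
    # => var-lib-containerd-tmpmounts.mount
    # A hyphen inside a component, e.g. "containerd-mount...", is emitted as containerd\x2dmount...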
Dec 16 12:33:04.020015 containerd[1534]: time="2025-12-16T12:33:04.019953151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:04.020563 containerd[1534]: time="2025-12-16T12:33:04.020534831Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 16 12:33:04.023285 containerd[1534]: time="2025-12-16T12:33:04.023227031Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:04.026454 containerd[1534]: time="2025-12-16T12:33:04.026412671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:04.027640 containerd[1534]: time="2025-12-16T12:33:04.027597031Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.44199088s" Dec 16 12:33:04.027640 containerd[1534]: time="2025-12-16T12:33:04.027627911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:33:04.028196 containerd[1534]: time="2025-12-16T12:33:04.028033231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:33:04.443370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703300151.mount: Deactivated successfully. 
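The pull messages carry enough data for a rough throughput estimate: the coredns image above is reported as size 16948420 bytes, pulled in 1.44199088s, i.e. roughly 11.8 MB/s from registry.k8s.io. As a one-liner:

    # Throughput implied by the "Pulled image ... size ... in ..." line above.
    awk 'BEGIN { printf "%.1f MB/s\n", 16948420 / 1.44199088 / 1e6 }'   # => 11.8 MB/s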
Dec 16 12:33:04.450193 containerd[1534]: time="2025-12-16T12:33:04.450146191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:33:04.451380 containerd[1534]: time="2025-12-16T12:33:04.451350151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 16 12:33:04.452433 containerd[1534]: time="2025-12-16T12:33:04.452406351Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:33:04.454527 containerd[1534]: time="2025-12-16T12:33:04.454470591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:33:04.455240 containerd[1534]: time="2025-12-16T12:33:04.455070151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 426.86296ms" Dec 16 12:33:04.455240 containerd[1534]: time="2025-12-16T12:33:04.455102991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:33:04.455758 containerd[1534]: time="2025-12-16T12:33:04.455729551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:33:04.990976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083680717.mount: Deactivated successfully. 
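Unlike the other images, pause:3.10 is stored with an extra io.cri-containerd.pinned=pinned label, visible in the ImageCreate events above; pinned images are exempt from image garbage collection, since every pod sandbox depends on the pause container. Assuming crictl is present, the image record can be listed and inspected (recent CRI versions expose a pinned field in the inspect output):

    sudo crictl images registry.k8s.io/pause
    sudo crictl inspecti registry.k8s.io/pause:3.10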
Dec 16 12:33:06.912148 containerd[1534]: time="2025-12-16T12:33:06.911752791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:06.914780 containerd[1534]: time="2025-12-16T12:33:06.914667391Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 16 12:33:06.917153 containerd[1534]: time="2025-12-16T12:33:06.916856671Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:06.920278 containerd[1534]: time="2025-12-16T12:33:06.920184271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:06.922595 containerd[1534]: time="2025-12-16T12:33:06.922554391Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.46679312s" Dec 16 12:33:06.922745 containerd[1534]: time="2025-12-16T12:33:06.922704671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:33:10.949740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:33:10.953330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:11.104647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:11.128520 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:33:11.165181 kubelet[2216]: E1216 12:33:11.165094 2216 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:33:11.168963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:33:11.169207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:33:11.169741 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107M memory peak. Dec 16 12:33:12.919416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:12.919835 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107M memory peak. Dec 16 12:33:12.921632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:12.944440 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)... Dec 16 12:33:12.944456 systemd[1]: Reloading... Dec 16 12:33:13.006204 zram_generator::config[2276]: No configuration found. Dec 16 12:33:13.154956 systemd[1]: Reloading finished in 210 ms. Dec 16 12:33:13.225782 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:33:13.225864 systemd[1]: kubelet.service: Failed with result 'signal'. 
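The reload above ("Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)") is a daemon-reload issued from the SSH session, after which kubelet is stopped and restarted so that new unit drop-ins take effect; the next start logs deprecated-flag warnings instead of dying immediately, because the config file now exists. Reconstructed from the reload/stop/start entries, session-7 evidently ran:

    sudo systemctl daemon-reload
    sudo systemctl restart kubelet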
Dec 16 12:33:13.226101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:13.226169 systemd[1]: kubelet.service: Consumed 92ms CPU time, 95.2M memory peak. Dec 16 12:33:13.227607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:13.350930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:13.356909 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:33:13.399607 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:13.399607 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:33:13.399607 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:13.399926 kubelet[2318]: I1216 12:33:13.399659 2318 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:33:14.233967 kubelet[2318]: I1216 12:33:14.233915 2318 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:33:14.233967 kubelet[2318]: I1216 12:33:14.233950 2318 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:33:14.234276 kubelet[2318]: I1216 12:33:14.234247 2318 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:33:14.258187 kubelet[2318]: E1216 12:33:14.258123 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:14.259769 kubelet[2318]: I1216 12:33:14.259622 2318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:33:14.270254 kubelet[2318]: I1216 12:33:14.270213 2318 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:33:14.273076 kubelet[2318]: I1216 12:33:14.273040 2318 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:33:14.273778 kubelet[2318]: I1216 12:33:14.273726 2318 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:33:14.273956 kubelet[2318]: I1216 12:33:14.273772 2318 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:33:14.274053 kubelet[2318]: I1216 12:33:14.274016 2318 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:33:14.274053 kubelet[2318]: I1216 12:33:14.274026 2318 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:33:14.274299 kubelet[2318]: I1216 12:33:14.274270 2318 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:14.280067 kubelet[2318]: I1216 12:33:14.276719 2318 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:33:14.280067 kubelet[2318]: I1216 12:33:14.276750 2318 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:33:14.280067 kubelet[2318]: I1216 12:33:14.276777 2318 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:33:14.280067 kubelet[2318]: I1216 12:33:14.276791 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:33:14.280067 kubelet[2318]: W1216 12:33:14.278552 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:14.280067 kubelet[2318]: E1216 12:33:14.278603 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:14.280067 kubelet[2318]: W1216 12:33:14.279444 2318 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:14.280067 kubelet[2318]: E1216 12:33:14.279484 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:14.280699 kubelet[2318]: I1216 12:33:14.280672 2318 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:33:14.281351 kubelet[2318]: I1216 12:33:14.281329 2318 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:33:14.281478 kubelet[2318]: W1216 12:33:14.281455 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:33:14.282670 kubelet[2318]: I1216 12:33:14.282639 2318 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:33:14.282670 kubelet[2318]: I1216 12:33:14.282673 2318 server.go:1287] "Started kubelet" Dec 16 12:33:14.282802 kubelet[2318]: I1216 12:33:14.282768 2318 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:33:14.283140 kubelet[2318]: I1216 12:33:14.283078 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:33:14.283377 kubelet[2318]: I1216 12:33:14.283347 2318 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:33:14.283812 kubelet[2318]: I1216 12:33:14.283793 2318 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:33:14.286877 kubelet[2318]: I1216 12:33:14.286656 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:33:14.286877 kubelet[2318]: I1216 12:33:14.286773 2318 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:33:14.286877 kubelet[2318]: I1216 12:33:14.286841 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:33:14.288200 kubelet[2318]: I1216 12:33:14.288167 2318 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:33:14.288257 kubelet[2318]: I1216 12:33:14.288235 2318 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:33:14.288682 kubelet[2318]: E1216 12:33:14.288444 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b2237633d99f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:33:14.282654111 +0000 UTC m=+0.921956961,LastTimestamp:2025-12-16 12:33:14.282654111 +0000 UTC m=+0.921956961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:33:14.288750 kubelet[2318]: W1216 12:33:14.288705 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:14.288750 kubelet[2318]: E1216 12:33:14.288744 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:14.288991 kubelet[2318]: E1216 12:33:14.288953 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Dec 16 12:33:14.289029 kubelet[2318]: E1216 12:33:14.288992 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:14.289437 kubelet[2318]: E1216 12:33:14.289400 2318 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:33:14.289513 kubelet[2318]: I1216 12:33:14.289488 2318 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:33:14.289641 kubelet[2318]: I1216 12:33:14.289568 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:33:14.291209 kubelet[2318]: I1216 12:33:14.290462 2318 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:33:14.302207 kubelet[2318]: I1216 12:33:14.302113 2318 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:33:14.302207 kubelet[2318]: I1216 12:33:14.302149 2318 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:33:14.302207 kubelet[2318]: I1216 12:33:14.302168 2318 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:14.389284 kubelet[2318]: E1216 12:33:14.389189 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:14.481300 kubelet[2318]: I1216 12:33:14.481242 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:33:14.484013 kubelet[2318]: I1216 12:33:14.483945 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:33:14.484173 kubelet[2318]: I1216 12:33:14.484096 2318 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:33:14.484251 kubelet[2318]: I1216 12:33:14.484240 2318 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
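All of the "dial tcp 10.0.0.82:6443: connect: connection refused" errors above are the normal bootstrap race on a control-plane node: the kubelet is up and watching /etc/kubernetes/manifests for static pods, but the kube-apiserver it keeps dialing is itself one of those static pods and has not started yet. Progress can be checked from the host while the loop converges (a sketch; 10.0.0.82:6443 is the endpoint from the log, and -k skips TLS verification):

    ls /etc/kubernetes/manifests               # static pod manifests the kubelet will run
    sudo crictl ps --name kube-apiserver       # the container, once it starts
    curl -k https://10.0.0.82:6443/healthz     # "ok" once the API server is serving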
Dec 16 12:33:14.484294 kubelet[2318]: I1216 12:33:14.484287 2318 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:33:14.484392 kubelet[2318]: E1216 12:33:14.484370 2318 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:33:14.486673 kubelet[2318]: W1216 12:33:14.486630 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:14.486786 kubelet[2318]: E1216 12:33:14.486767 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:14.487458 kubelet[2318]: I1216 12:33:14.487440 2318 policy_none.go:49] "None policy: Start" Dec 16 12:33:14.487493 kubelet[2318]: I1216 12:33:14.487463 2318 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:33:14.487493 kubelet[2318]: I1216 12:33:14.487473 2318 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:33:14.489320 kubelet[2318]: E1216 12:33:14.489299 2318 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:14.489389 kubelet[2318]: E1216 12:33:14.489357 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Dec 16 12:33:14.516707 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:33:14.565846 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:33:14.569341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:33:14.577053 kubelet[2318]: I1216 12:33:14.577003 2318 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:33:14.577447 kubelet[2318]: I1216 12:33:14.577223 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:33:14.577447 kubelet[2318]: I1216 12:33:14.577240 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:33:14.577447 kubelet[2318]: I1216 12:33:14.577445 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:33:14.578382 kubelet[2318]: E1216 12:33:14.578361 2318 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:33:14.578481 kubelet[2318]: E1216 12:33:14.578469 2318 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:33:14.589511 kubelet[2318]: I1216 12:33:14.589481 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:14.589511 kubelet[2318]: I1216 12:33:14.589512 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:14.589611 kubelet[2318]: I1216 12:33:14.589530 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:14.589611 kubelet[2318]: I1216 12:33:14.589545 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:14.589611 kubelet[2318]: I1216 12:33:14.589563 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:14.589611 kubelet[2318]: I1216 12:33:14.589578 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:14.589611 kubelet[2318]: I1216 12:33:14.589593 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:14.589707 kubelet[2318]: I1216 12:33:14.589606 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:14.589707 kubelet[2318]: I1216 12:33:14.589626 2318 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:14.593280 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 16 12:33:14.624772 kubelet[2318]: E1216 12:33:14.624734 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:14.627602 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 16 12:33:14.646262 kubelet[2318]: E1216 12:33:14.646229 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:14.648589 systemd[1]: Created slice kubepods-burstable-podc6110d4b9de6aaca527401514c90af4b.slice - libcontainer container kubepods-burstable-podc6110d4b9de6aaca527401514c90af4b.slice. Dec 16 12:33:14.650693 kubelet[2318]: E1216 12:33:14.650647 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:14.678775 kubelet[2318]: I1216 12:33:14.678739 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:14.679296 kubelet[2318]: E1216 12:33:14.679259 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Dec 16 12:33:14.880968 kubelet[2318]: I1216 12:33:14.880854 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:14.881226 kubelet[2318]: E1216 12:33:14.881195 2318 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Dec 16 12:33:14.889874 kubelet[2318]: E1216 12:33:14.889822 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Dec 16 12:33:14.926862 containerd[1534]: time="2025-12-16T12:33:14.926810311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:14.947570 containerd[1534]: time="2025-12-16T12:33:14.947530071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:14.950413 containerd[1534]: time="2025-12-16T12:33:14.950367471Z" level=info msg="connecting to shim c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7" address="unix:///run/containerd/s/33a6cdfac0362f66833c1e9eefdb6502db6572aa5f2cd42b41b7f3eae77cdd42" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:14.952158 containerd[1534]: time="2025-12-16T12:33:14.952115551Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c6110d4b9de6aaca527401514c90af4b,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:14.981341 systemd[1]: Started cri-containerd-c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7.scope - libcontainer container c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7. Dec 16 12:33:15.019275 containerd[1534]: time="2025-12-16T12:33:15.019206671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7\"" Dec 16 12:33:15.023273 containerd[1534]: time="2025-12-16T12:33:15.023103271Z" level=info msg="CreateContainer within sandbox \"c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:33:15.024283 containerd[1534]: time="2025-12-16T12:33:15.024250471Z" level=info msg="connecting to shim d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8" address="unix:///run/containerd/s/972f336a7049d3f44222eefef9dca6b31df68557a11e3c2a3c02ff5736ea6ccb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:15.034610 containerd[1534]: time="2025-12-16T12:33:15.034564671Z" level=info msg="Container 97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:15.037675 containerd[1534]: time="2025-12-16T12:33:15.037635391Z" level=info msg="connecting to shim 36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4" address="unix:///run/containerd/s/667ff51e9d186af44ae81b86c43ce39f183d368d50e54569ffbd2b0117d3fc37" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:15.045035 containerd[1534]: time="2025-12-16T12:33:15.044989271Z" level=info msg="CreateContainer within sandbox \"c973d0d1b0293a9d317b7b07233e1364f2c8832a796f30dc27d8ba52dd40dba7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3\"" Dec 16 12:33:15.046283 containerd[1534]: time="2025-12-16T12:33:15.046057351Z" level=info msg="StartContainer for \"97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3\"" Dec 16 12:33:15.047948 kubelet[2318]: E1216 12:33:15.047855 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b2237633d99f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:33:14.282654111 +0000 UTC m=+0.921956961,LastTimestamp:2025-12-16 12:33:14.282654111 +0000 UTC m=+0.921956961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:33:15.048532 containerd[1534]: time="2025-12-16T12:33:15.048505831Z" level=info msg="connecting to shim 97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3" address="unix:///run/containerd/s/33a6cdfac0362f66833c1e9eefdb6502db6572aa5f2cd42b41b7f3eae77cdd42" protocol=ttrpc version=3 Dec 16 
12:33:15.055403 systemd[1]: Started cri-containerd-d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8.scope - libcontainer container d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8. Dec 16 12:33:15.058208 systemd[1]: Started cri-containerd-36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4.scope - libcontainer container 36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4. Dec 16 12:33:15.064451 systemd[1]: Started cri-containerd-97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3.scope - libcontainer container 97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3. Dec 16 12:33:15.110564 containerd[1534]: time="2025-12-16T12:33:15.110449791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c6110d4b9de6aaca527401514c90af4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8\"" Dec 16 12:33:15.114313 containerd[1534]: time="2025-12-16T12:33:15.114269631Z" level=info msg="CreateContainer within sandbox \"d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:33:15.118526 containerd[1534]: time="2025-12-16T12:33:15.118493791Z" level=info msg="StartContainer for \"97446353877b349b4252082dc6c02b51dc1434af481e036c0c1079efa501d8d3\" returns successfully" Dec 16 12:33:15.118706 containerd[1534]: time="2025-12-16T12:33:15.118681231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4\"" Dec 16 12:33:15.121623 containerd[1534]: time="2025-12-16T12:33:15.121594831Z" level=info msg="CreateContainer within sandbox \"36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:33:15.122855 containerd[1534]: time="2025-12-16T12:33:15.122707111Z" level=info msg="Container 961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:15.133087 containerd[1534]: time="2025-12-16T12:33:15.132982711Z" level=info msg="CreateContainer within sandbox \"d50490103b5dc54290d3e997353c46ae783d2f8315ce2bacde4c469891813fe8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e\"" Dec 16 12:33:15.133631 containerd[1534]: time="2025-12-16T12:33:15.133422671Z" level=info msg="Container 7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:15.133949 containerd[1534]: time="2025-12-16T12:33:15.133792911Z" level=info msg="StartContainer for \"961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e\"" Dec 16 12:33:15.135887 containerd[1534]: time="2025-12-16T12:33:15.135639551Z" level=info msg="connecting to shim 961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e" address="unix:///run/containerd/s/972f336a7049d3f44222eefef9dca6b31df68557a11e3c2a3c02ff5736ea6ccb" protocol=ttrpc version=3 Dec 16 12:33:15.142020 containerd[1534]: time="2025-12-16T12:33:15.141983351Z" level=info msg="CreateContainer within sandbox \"36a47132145f23c6a26b415ee5ad60682c8f0f97e9cc3643c8a1e14842930cc4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835\"" Dec 16 12:33:15.143367 containerd[1534]: time="2025-12-16T12:33:15.143339191Z" level=info msg="StartContainer for \"7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835\"" Dec 16 12:33:15.145337 containerd[1534]: time="2025-12-16T12:33:15.145302391Z" level=info msg="connecting to shim 7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835" address="unix:///run/containerd/s/667ff51e9d186af44ae81b86c43ce39f183d368d50e54569ffbd2b0117d3fc37" protocol=ttrpc version=3 Dec 16 12:33:15.152181 kubelet[2318]: W1216 12:33:15.151069 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:15.152181 kubelet[2318]: E1216 12:33:15.151156 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:15.159343 systemd[1]: Started cri-containerd-961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e.scope - libcontainer container 961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e. Dec 16 12:33:15.162969 systemd[1]: Started cri-containerd-7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835.scope - libcontainer container 7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835. Dec 16 12:33:15.196983 kubelet[2318]: W1216 12:33:15.196894 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Dec 16 12:33:15.196983 kubelet[2318]: E1216 12:33:15.196957 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:33:15.241381 containerd[1534]: time="2025-12-16T12:33:15.241231711Z" level=info msg="StartContainer for \"7556441e1f9f2bf048dd1249f7930323262d901698453f7bb038c67ef0fc2835\" returns successfully" Dec 16 12:33:15.242898 containerd[1534]: time="2025-12-16T12:33:15.242811911Z" level=info msg="StartContainer for \"961e3352a5f620180168078e04fe67bdef6c27677c9e0e4ab24f00cee6237b0e\" returns successfully" Dec 16 12:33:15.283106 kubelet[2318]: I1216 12:33:15.283076 2318 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:15.490845 kubelet[2318]: E1216 12:33:15.490814 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:15.493700 kubelet[2318]: E1216 12:33:15.493675 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:15.497076 kubelet[2318]: E1216 12:33:15.497057 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:16.498730 kubelet[2318]: E1216 12:33:16.498684 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:16.499374 kubelet[2318]: E1216 12:33:16.499357 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:17.503851 kubelet[2318]: E1216 12:33:17.503784 2318 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:33:17.774508 kubelet[2318]: E1216 12:33:17.774179 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:33:17.843531 kubelet[2318]: I1216 12:33:17.843491 2318 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:33:17.843531 kubelet[2318]: E1216 12:33:17.843532 2318 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:33:17.889012 kubelet[2318]: I1216 12:33:17.888975 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:17.904523 kubelet[2318]: E1216 12:33:17.904489 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:17.904523 kubelet[2318]: I1216 12:33:17.904520 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:17.906245 kubelet[2318]: E1216 12:33:17.906222 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:17.906245 kubelet[2318]: I1216 12:33:17.906247 2318 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:17.908282 kubelet[2318]: E1216 12:33:17.908255 2318 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:18.280692 kubelet[2318]: I1216 12:33:18.280618 2318 apiserver.go:52] "Watching apiserver" Dec 16 12:33:18.288341 kubelet[2318]: I1216 12:33:18.288293 2318 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:33:20.299890 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... Dec 16 12:33:20.299908 systemd[1]: Reloading... Dec 16 12:33:20.369168 zram_generator::config[2637]: No configuration found. Dec 16 12:33:20.543538 systemd[1]: Reloading finished in 243 ms. Dec 16 12:33:20.569521 kubelet[2318]: I1216 12:33:20.569303 2318 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:33:20.569422 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:20.587254 systemd[1]: kubelet.service: Deactivated successfully. 
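
Note the retry interval in the controller.go:145 "Failed to ensure lease exists, will retry" entries: 200ms at 12:33:14.288, 400ms at 12:33:14.489, 800ms at 12:33:14.889. The lease controller appears to double its interval on each consecutive failure. A small Go sketch of that progression; the doubling is read off the log, while the cap is an assumption added for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // nextInterval doubles the retry interval, matching the 200ms -> 400ms -> 800ms
    // progression visible in the controller.go:145 entries above. The 7s cap is an
    // illustrative assumption, not a value taken from this log.
    func nextInterval(d time.Duration) time.Duration {
        d *= 2
        if limit := 7 * time.Second; d > limit {
            d = limit
        }
        return d
    }

    func main() {
        d := 200 * time.Millisecond
        for i := 0; i < 6; i++ {
            fmt.Println("retry in", d)
            d = nextInterval(d)
        }
    }
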
Dec 16 12:33:20.587539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:20.587601 systemd[1]: kubelet.service: Consumed 1.323s CPU time, 129.9M memory peak. Dec 16 12:33:20.589610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:33:20.749289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:33:20.761820 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:33:20.811169 kubelet[2676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:20.811169 kubelet[2676]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:33:20.811169 kubelet[2676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:33:20.811530 kubelet[2676]: I1216 12:33:20.811315 2676 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:33:20.821428 kubelet[2676]: I1216 12:33:20.820770 2676 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:33:20.821428 kubelet[2676]: I1216 12:33:20.820982 2676 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:33:20.821559 kubelet[2676]: I1216 12:33:20.821468 2676 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:33:20.824201 kubelet[2676]: I1216 12:33:20.823469 2676 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:33:20.826295 kubelet[2676]: I1216 12:33:20.826264 2676 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:33:20.830243 kubelet[2676]: I1216 12:33:20.830192 2676 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:33:20.833258 kubelet[2676]: I1216 12:33:20.833232 2676 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:33:20.833514 kubelet[2676]: I1216 12:33:20.833464 2676 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:33:20.833690 kubelet[2676]: I1216 12:33:20.833496 2676 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:33:20.833690 kubelet[2676]: I1216 12:33:20.833688 2676 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:33:20.833788 kubelet[2676]: I1216 12:33:20.833698 2676 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:33:20.833788 kubelet[2676]: I1216 12:33:20.833742 2676 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:20.834138 kubelet[2676]: I1216 12:33:20.834058 2676 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:33:20.835231 kubelet[2676]: I1216 12:33:20.835153 2676 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:33:20.835231 kubelet[2676]: I1216 12:33:20.835236 2676 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:33:20.835853 kubelet[2676]: I1216 12:33:20.835249 2676 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:33:20.836685 kubelet[2676]: I1216 12:33:20.836659 2676 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:33:20.837431 kubelet[2676]: I1216 12:33:20.837415 2676 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:33:20.838348 kubelet[2676]: I1216 12:33:20.838328 2676 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:33:20.838571 kubelet[2676]: I1216 12:33:20.838558 2676 server.go:1287] "Started kubelet" Dec 16 12:33:20.845552 kubelet[2676]: I1216 12:33:20.845513 2676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:33:20.848898 kubelet[2676]: I1216 12:33:20.848858 2676 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Dec 16 12:33:20.849217 kubelet[2676]: I1216 12:33:20.849183 2676 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:33:20.851968 kubelet[2676]: I1216 12:33:20.851941 2676 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:33:20.852306 kubelet[2676]: I1216 12:33:20.852269 2676 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:33:20.852559 kubelet[2676]: I1216 12:33:20.852539 2676 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:33:20.852719 kubelet[2676]: I1216 12:33:20.852701 2676 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:33:20.853485 kubelet[2676]: I1216 12:33:20.853448 2676 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:33:20.853608 kubelet[2676]: I1216 12:33:20.853584 2676 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:33:20.854664 kubelet[2676]: E1216 12:33:20.854617 2676 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:33:20.856185 kubelet[2676]: I1216 12:33:20.840473 2676 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:33:20.856185 kubelet[2676]: E1216 12:33:20.854998 2676 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:33:20.856185 kubelet[2676]: I1216 12:33:20.855340 2676 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:33:20.859101 kubelet[2676]: I1216 12:33:20.858408 2676 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:33:20.876432 kubelet[2676]: I1216 12:33:20.876385 2676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:33:20.878503 kubelet[2676]: I1216 12:33:20.878438 2676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:33:20.879149 kubelet[2676]: I1216 12:33:20.879112 2676 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:33:20.879594 kubelet[2676]: I1216 12:33:20.879574 2676 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:33:20.879594 kubelet[2676]: I1216 12:33:20.879589 2676 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:33:20.879683 kubelet[2676]: E1216 12:33:20.879635 2676 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904183 2676 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904210 2676 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904238 2676 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904419 2676 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904431 2676 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904451 2676 policy_none.go:49] "None policy: Start" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904460 2676 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904469 2676 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:33:20.904947 kubelet[2676]: I1216 12:33:20.904566 2676 state_mem.go:75] "Updated machine memory state" Dec 16 12:33:20.914745 kubelet[2676]: I1216 12:33:20.914713 2676 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:33:20.914929 kubelet[2676]: I1216 12:33:20.914907 2676 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:33:20.914983 kubelet[2676]: I1216 12:33:20.914923 2676 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:33:20.915230 kubelet[2676]: I1216 12:33:20.915204 2676 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:33:20.916683 kubelet[2676]: E1216 12:33:20.916660 2676 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:33:20.980767 kubelet[2676]: I1216 12:33:20.980723 2676 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:20.980767 kubelet[2676]: I1216 12:33:20.980750 2676 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:20.980919 kubelet[2676]: I1216 12:33:20.980782 2676 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.017245 kubelet[2676]: I1216 12:33:21.017207 2676 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:33:21.027165 kubelet[2676]: I1216 12:33:21.026491 2676 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:33:21.027165 kubelet[2676]: I1216 12:33:21.026578 2676 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:33:21.053856 kubelet[2676]: I1216 12:33:21.053803 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.054048 kubelet[2676]: I1216 12:33:21.054031 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.054180 kubelet[2676]: I1216 12:33:21.054159 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.054260 kubelet[2676]: I1216 12:33:21.054248 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.054332 kubelet[2676]: I1216 12:33:21.054320 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.054390 kubelet[2676]: I1216 12:33:21.054380 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.054484 kubelet[2676]: I1216 12:33:21.054472 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/c6110d4b9de6aaca527401514c90af4b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c6110d4b9de6aaca527401514c90af4b\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.054560 kubelet[2676]: I1216 12:33:21.054548 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:33:21.054644 kubelet[2676]: I1216 12:33:21.054634 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:21.836831 kubelet[2676]: I1216 12:33:21.836788 2676 apiserver.go:52] "Watching apiserver" Dec 16 12:33:21.853155 kubelet[2676]: I1216 12:33:21.852846 2676 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:33:21.896718 kubelet[2676]: I1216 12:33:21.896486 2676 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:21.896718 kubelet[2676]: I1216 12:33:21.896587 2676 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.905177 kubelet[2676]: E1216 12:33:21.905118 2676 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:33:21.909226 kubelet[2676]: E1216 12:33:21.908193 2676 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:33:21.919037 kubelet[2676]: I1216 12:33:21.918964 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.918928277 podStartE2EDuration="1.918928277s" podCreationTimestamp="2025-12-16 12:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:21.918845396 +0000 UTC m=+1.153499091" watchObservedRunningTime="2025-12-16 12:33:21.918928277 +0000 UTC m=+1.153581972" Dec 16 12:33:21.941602 kubelet[2676]: I1216 12:33:21.941550 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.94153248 podStartE2EDuration="1.94153248s" podCreationTimestamp="2025-12-16 12:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:21.929291093 +0000 UTC m=+1.163944828" watchObservedRunningTime="2025-12-16 12:33:21.94153248 +0000 UTC m=+1.176186215" Dec 16 12:33:21.955859 kubelet[2676]: I1216 12:33:21.955359 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.955340395 podStartE2EDuration="1.955340395s" podCreationTimestamp="2025-12-16 12:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 12:33:21.941669761 +0000 UTC m=+1.176323496" watchObservedRunningTime="2025-12-16 12:33:21.955340395 +0000 UTC m=+1.189994130" Dec 16 12:33:26.067030 kubelet[2676]: I1216 12:33:26.066988 2676 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:33:26.067505 containerd[1534]: time="2025-12-16T12:33:26.067355900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:33:26.067867 kubelet[2676]: I1216 12:33:26.067535 2676 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:33:26.119570 systemd[1]: Created slice kubepods-besteffort-pod21ba8e11_68c7_4056_9878_9eee0c29e620.slice - libcontainer container kubepods-besteffort-pod21ba8e11_68c7_4056_9878_9eee0c29e620.slice. Dec 16 12:33:26.183847 kubelet[2676]: I1216 12:33:26.183797 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/21ba8e11-68c7-4056-9878-9eee0c29e620-kube-proxy\") pod \"kube-proxy-nhhfs\" (UID: \"21ba8e11-68c7-4056-9878-9eee0c29e620\") " pod="kube-system/kube-proxy-nhhfs" Dec 16 12:33:26.183978 kubelet[2676]: I1216 12:33:26.183862 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgmx9\" (UniqueName: \"kubernetes.io/projected/21ba8e11-68c7-4056-9878-9eee0c29e620-kube-api-access-pgmx9\") pod \"kube-proxy-nhhfs\" (UID: \"21ba8e11-68c7-4056-9878-9eee0c29e620\") " pod="kube-system/kube-proxy-nhhfs" Dec 16 12:33:26.183978 kubelet[2676]: I1216 12:33:26.183884 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21ba8e11-68c7-4056-9878-9eee0c29e620-xtables-lock\") pod \"kube-proxy-nhhfs\" (UID: \"21ba8e11-68c7-4056-9878-9eee0c29e620\") " pod="kube-system/kube-proxy-nhhfs" Dec 16 12:33:26.183978 kubelet[2676]: I1216 12:33:26.183901 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21ba8e11-68c7-4056-9878-9eee0c29e620-lib-modules\") pod \"kube-proxy-nhhfs\" (UID: \"21ba8e11-68c7-4056-9878-9eee0c29e620\") " pod="kube-system/kube-proxy-nhhfs" Dec 16 12:33:26.295221 kubelet[2676]: E1216 12:33:26.295158 2676 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:33:26.295221 kubelet[2676]: E1216 12:33:26.295197 2676 projected.go:194] Error preparing data for projected volume kube-api-access-pgmx9 for pod kube-system/kube-proxy-nhhfs: configmap "kube-root-ca.crt" not found Dec 16 12:33:26.295432 kubelet[2676]: E1216 12:33:26.295274 2676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/21ba8e11-68c7-4056-9878-9eee0c29e620-kube-api-access-pgmx9 podName:21ba8e11-68c7-4056-9878-9eee0c29e620 nodeName:}" failed. No retries permitted until 2025-12-16 12:33:26.795249439 +0000 UTC m=+6.029903134 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pgmx9" (UniqueName: "kubernetes.io/projected/21ba8e11-68c7-4056-9878-9eee0c29e620-kube-api-access-pgmx9") pod "kube-proxy-nhhfs" (UID: "21ba8e11-68c7-4056-9878-9eee0c29e620") : configmap "kube-root-ca.crt" not found Dec 16 12:33:27.031193 containerd[1534]: time="2025-12-16T12:33:27.031117180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhhfs,Uid:21ba8e11-68c7-4056-9878-9eee0c29e620,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:27.052076 containerd[1534]: time="2025-12-16T12:33:27.052025817Z" level=info msg="connecting to shim c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4" address="unix:///run/containerd/s/7ba696f3a779ab007fb9d20193ab231b5069dc272ba51a60bf8e058d2fe72644" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:27.079340 systemd[1]: Started cri-containerd-c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4.scope - libcontainer container c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4. Dec 16 12:33:27.117760 containerd[1534]: time="2025-12-16T12:33:27.117718940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nhhfs,Uid:21ba8e11-68c7-4056-9878-9eee0c29e620,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4\"" Dec 16 12:33:27.121321 containerd[1534]: time="2025-12-16T12:33:27.121249233Z" level=info msg="CreateContainer within sandbox \"c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:33:27.136567 containerd[1534]: time="2025-12-16T12:33:27.136085088Z" level=info msg="Container 967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:27.148478 systemd[1]: Created slice kubepods-besteffort-podfa91db9b_cb66_4cee_822e_2bb953013af4.slice - libcontainer container kubepods-besteffort-podfa91db9b_cb66_4cee_822e_2bb953013af4.slice. Dec 16 12:33:27.151154 containerd[1534]: time="2025-12-16T12:33:27.151007383Z" level=info msg="CreateContainer within sandbox \"c1ddd06bae91ba019534998787b6878441f433d1711ffa326de59482743944c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31\"" Dec 16 12:33:27.152741 containerd[1534]: time="2025-12-16T12:33:27.152709869Z" level=info msg="StartContainer for \"967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31\"" Dec 16 12:33:27.155001 containerd[1534]: time="2025-12-16T12:33:27.154924398Z" level=info msg="connecting to shim 967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31" address="unix:///run/containerd/s/7ba696f3a779ab007fb9d20193ab231b5069dc272ba51a60bf8e058d2fe72644" protocol=ttrpc version=3 Dec 16 12:33:27.173356 systemd[1]: Started cri-containerd-967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31.scope - libcontainer container 967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31. 
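
Each "connecting to shim" entry above names the per-sandbox ttrpc socket as a unix:// URL. A hedged Go sketch (socket path copied from the kube-proxy entry; real clients speak ttrpc over this socket, which this probe does not attempt) that parses the address and checks the socket is accepting connections:

    package main

    import (
        "fmt"
        "net"
        "net/url"
        "time"
    )

    func main() {
        // Shim address copied from the "connecting to shim" entry above.
        const addr = "unix:///run/containerd/s/7ba696f3a779ab007fb9d20193ab231b5069dc272ba51a60bf8e058d2fe72644"

        u, err := url.Parse(addr)
        if err != nil {
            panic(err)
        }
        fmt.Println("scheme:", u.Scheme, "socket path:", u.Path)

        // A bare connect only proves the shim is listening; it says nothing
        // about the ttrpc protocol spoken on top.
        conn, err := net.DialTimeout(u.Scheme, u.Path, time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("shim socket accepted the connection")
    }
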
Dec 16 12:33:27.189445 kubelet[2676]: I1216 12:33:27.189394 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa91db9b-cb66-4cee-822e-2bb953013af4-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7cs4p\" (UID: \"fa91db9b-cb66-4cee-822e-2bb953013af4\") " pod="tigera-operator/tigera-operator-7dcd859c48-7cs4p" Dec 16 12:33:27.189445 kubelet[2676]: I1216 12:33:27.189442 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nszz\" (UniqueName: \"kubernetes.io/projected/fa91db9b-cb66-4cee-822e-2bb953013af4-kube-api-access-7nszz\") pod \"tigera-operator-7dcd859c48-7cs4p\" (UID: \"fa91db9b-cb66-4cee-822e-2bb953013af4\") " pod="tigera-operator/tigera-operator-7dcd859c48-7cs4p" Dec 16 12:33:27.250249 containerd[1534]: time="2025-12-16T12:33:27.250189990Z" level=info msg="StartContainer for \"967739fa899b529e59335fbbcdca4cf933afe352afaec0c9bc38a46ebc142f31\" returns successfully" Dec 16 12:33:27.453084 containerd[1534]: time="2025-12-16T12:33:27.452994300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7cs4p,Uid:fa91db9b-cb66-4cee-822e-2bb953013af4,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:33:27.468205 containerd[1534]: time="2025-12-16T12:33:27.468123316Z" level=info msg="connecting to shim 0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16" address="unix:///run/containerd/s/d5590ccb450e35139de226aac068a7f122e6a6d24286b634f01e4983ccbddaea" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:27.492325 systemd[1]: Started cri-containerd-0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16.scope - libcontainer container 0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16. Dec 16 12:33:27.523185 containerd[1534]: time="2025-12-16T12:33:27.523100799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7cs4p,Uid:fa91db9b-cb66-4cee-822e-2bb953013af4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16\"" Dec 16 12:33:27.524965 containerd[1534]: time="2025-12-16T12:33:27.524840886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:33:27.919813 kubelet[2676]: I1216 12:33:27.919667 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nhhfs" podStartSLOduration=1.9196489460000001 podStartE2EDuration="1.919648946s" podCreationTimestamp="2025-12-16 12:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:33:27.919522786 +0000 UTC m=+7.154176521" watchObservedRunningTime="2025-12-16 12:33:27.919648946 +0000 UTC m=+7.154302681" Dec 16 12:33:30.016261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898022288.mount: Deactivated successfully. 
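
The pod_startup_latency_tracker entry above reports podStartSLOduration as roughly the gap between podCreationTimestamp and observedRunningTime, and the timestamps use Go's default time.Time formatting, so the figure can be re-derived. A sketch using the kube-proxy-nhhfs values copied from that entry:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Go's default time.Time formatting, as printed in the log entries above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created, err := time.Parse(layout, "2025-12-16 12:33:26 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-12-16 12:33:27.919522786 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Close to the logged podStartSLOduration=1.919648946s; the tracker
        // samples the clock at a slightly different instant.
        fmt.Println("startup latency:", running.Sub(created))
    }
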
Dec 16 12:33:30.819029 containerd[1534]: time="2025-12-16T12:33:30.818935699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:30.820068 containerd[1534]: time="2025-12-16T12:33:30.819826502Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:33:30.820988 containerd[1534]: time="2025-12-16T12:33:30.820937426Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:30.824948 containerd[1534]: time="2025-12-16T12:33:30.824903838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:30.826305 containerd[1534]: time="2025-12-16T12:33:30.826258562Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.301381876s" Dec 16 12:33:30.826407 containerd[1534]: time="2025-12-16T12:33:30.826310162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:33:30.828275 containerd[1534]: time="2025-12-16T12:33:30.828237368Z" level=info msg="CreateContainer within sandbox \"0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:33:30.840291 containerd[1534]: time="2025-12-16T12:33:30.839649603Z" level=info msg="Container 1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:30.846909 containerd[1534]: time="2025-12-16T12:33:30.846848304Z" level=info msg="CreateContainer within sandbox \"0f3e71ba8f2b56c8ffe5c6f2b88166567c7baa3244c34dc737192d434f2aee16\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc\"" Dec 16 12:33:30.847475 containerd[1534]: time="2025-12-16T12:33:30.847433626Z" level=info msg="StartContainer for \"1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc\"" Dec 16 12:33:30.848701 containerd[1534]: time="2025-12-16T12:33:30.848658830Z" level=info msg="connecting to shim 1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc" address="unix:///run/containerd/s/d5590ccb450e35139de226aac068a7f122e6a6d24286b634f01e4983ccbddaea" protocol=ttrpc version=3 Dec 16 12:33:30.873374 systemd[1]: Started cri-containerd-1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc.scope - libcontainer container 1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc. 
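
The pull above reports 22152004 bytes read for quay.io/tigera/operator:v1.38.7 in 3.301381876s, so the effective throughput is easy to recompute. A sketch with the figures copied from those entries:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures taken from the tigera/operator pull entries above.
        const bytesRead = 22152004 // "active requests=0, bytes read=22152004"
        elapsed, err := time.ParseDuration("3.301381876s")
        if err != nil {
            panic(err)
        }

        mib := float64(bytesRead) / (1024 * 1024)
        fmt.Printf("pulled %.1f MiB in %s (%.2f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
    }
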
Dec 16 12:33:30.902110 containerd[1534]: time="2025-12-16T12:33:30.901887272Z" level=info msg="StartContainer for \"1366afc60f43c05ee745ffd8a73c7f26a2fe73febd63729b332cb09a10cdaecc\" returns successfully" Dec 16 12:33:30.939144 kubelet[2676]: I1216 12:33:30.938443 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7cs4p" podStartSLOduration=0.635821904 podStartE2EDuration="3.938424664s" podCreationTimestamp="2025-12-16 12:33:27 +0000 UTC" firstStartedPulling="2025-12-16 12:33:27.524399524 +0000 UTC m=+6.759053259" lastFinishedPulling="2025-12-16 12:33:30.827002244 +0000 UTC m=+10.061656019" observedRunningTime="2025-12-16 12:33:30.938193263 +0000 UTC m=+10.172846998" watchObservedRunningTime="2025-12-16 12:33:30.938424664 +0000 UTC m=+10.173078399" Dec 16 12:33:33.046146 update_engine[1508]: I20251216 12:33:33.045332 1508 update_attempter.cc:509] Updating boot flags... Dec 16 12:33:36.362601 sudo[1746]: pam_unix(sudo:session): session closed for user root Dec 16 12:33:36.365159 sshd[1745]: Connection closed by 10.0.0.1 port 35796 Dec 16 12:33:36.366837 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:36.373621 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:33:36.374270 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:35796.service: Deactivated successfully. Dec 16 12:33:36.380304 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:33:36.380491 systemd[1]: session-7.scope: Consumed 7.918s CPU time, 220.7M memory peak. Dec 16 12:33:36.384405 systemd-logind[1507]: Removed session 7. Dec 16 12:33:42.978811 systemd[1]: Created slice kubepods-besteffort-podde086d22_95d2_4771_8002_64d22642138e.slice - libcontainer container kubepods-besteffort-podde086d22_95d2_4771_8002_64d22642138e.slice. Dec 16 12:33:42.988734 kubelet[2676]: I1216 12:33:42.988603 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de086d22-95d2-4771-8002-64d22642138e-tigera-ca-bundle\") pod \"calico-typha-6fb555b86b-wqt89\" (UID: \"de086d22-95d2-4771-8002-64d22642138e\") " pod="calico-system/calico-typha-6fb555b86b-wqt89" Dec 16 12:33:42.988734 kubelet[2676]: I1216 12:33:42.988646 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de086d22-95d2-4771-8002-64d22642138e-typha-certs\") pod \"calico-typha-6fb555b86b-wqt89\" (UID: \"de086d22-95d2-4771-8002-64d22642138e\") " pod="calico-system/calico-typha-6fb555b86b-wqt89" Dec 16 12:33:42.988734 kubelet[2676]: I1216 12:33:42.988667 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvsv\" (UniqueName: \"kubernetes.io/projected/de086d22-95d2-4771-8002-64d22642138e-kube-api-access-pgvsv\") pod \"calico-typha-6fb555b86b-wqt89\" (UID: \"de086d22-95d2-4771-8002-64d22642138e\") " pod="calico-system/calico-typha-6fb555b86b-wqt89" Dec 16 12:33:43.140093 systemd[1]: Created slice kubepods-besteffort-poddea74453_ba89_414d_bd64_212d699b6654.slice - libcontainer container kubepods-besteffort-poddea74453_ba89_414d_bd64_212d699b6654.slice. 
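
The "Created slice" entries show the kubelet's systemd cgroup naming: a kubepods- prefix, the QoS class for non-guaranteed pods, then "pod" plus the pod UID with dashes rewritten to underscores (dashes act as hierarchy separators in slice unit names). A sketch of that convention using the calico-typha UID from the entries above; the guaranteed-pod branch follows the kubelet's documented layout rather than anything in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming visible in the "Created slice" entries:
    // the pod UID's dashes become underscores so they are not read as
    // hierarchy separators by systemd.
    func sliceName(qos, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        if qos == "guaranteed" {
            // Guaranteed pods sit directly under kubepods.slice.
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
        // UID taken from the calico-typha entries above.
        fmt.Println(sliceName("besteffort", "de086d22-95d2-4771-8002-64d22642138e"))
        // -> kubepods-besteffort-podde086d22_95d2_4771_8002_64d22642138e.slice
    }
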
Dec 16 12:33:43.190524 kubelet[2676]: I1216 12:33:43.190487 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-xtables-lock\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.190766 kubelet[2676]: I1216 12:33:43.190691 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-flexvol-driver-host\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.190766 kubelet[2676]: I1216 12:33:43.190715 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-var-run-calico\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.190968 kubelet[2676]: I1216 12:33:43.190864 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-lib-modules\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.190968 kubelet[2676]: I1216 12:33:43.190886 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dea74453-ba89-414d-bd64-212d699b6654-node-certs\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.190968 kubelet[2676]: I1216 12:33:43.190901 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfsw\" (UniqueName: \"kubernetes.io/projected/dea74453-ba89-414d-bd64-212d699b6654-kube-api-access-5hfsw\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191191 kubelet[2676]: I1216 12:33:43.191078 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-cni-net-dir\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191191 kubelet[2676]: I1216 12:33:43.191103 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-policysync\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191191 kubelet[2676]: I1216 12:33:43.191119 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dea74453-ba89-414d-bd64-212d699b6654-tigera-ca-bundle\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191338 kubelet[2676]: I1216 12:33:43.191247 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-cni-bin-dir\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191338 kubelet[2676]: I1216 12:33:43.191283 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-var-lib-calico\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.191338 kubelet[2676]: I1216 12:33:43.191302 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dea74453-ba89-414d-bd64-212d699b6654-cni-log-dir\") pod \"calico-node-ctb9q\" (UID: \"dea74453-ba89-414d-bd64-212d699b6654\") " pod="calico-system/calico-node-ctb9q"
Dec 16 12:33:43.284227 containerd[1534]: time="2025-12-16T12:33:43.284090232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb555b86b-wqt89,Uid:de086d22-95d2-4771-8002-64d22642138e,Namespace:calico-system,Attempt:0,}"
Dec 16 12:33:43.305313 kubelet[2676]: E1216 12:33:43.302304 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.305313 kubelet[2676]: W1216 12:33:43.302327 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.305313 kubelet[2676]: E1216 12:33:43.302347 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:33:43.321701 kubelet[2676]: E1216 12:33:43.321667 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.321701 kubelet[2676]: W1216 12:33:43.321689 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.321852 kubelet[2676]: E1216 12:33:43.321725 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:33:43.334398 kubelet[2676]: E1216 12:33:43.334042 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583"
Dec 16 12:33:43.339955 containerd[1534]: time="2025-12-16T12:33:43.339548505Z" level=info msg="connecting to shim c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7" address="unix:///run/containerd/s/af2eef5f9181c4dec95a159e5dce9d2746207b2d18d52f881a73fffe9911d87e" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:33:43.391878 kubelet[2676]: E1216 12:33:43.391736 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.391878 kubelet[2676]: W1216 12:33:43.391862 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.392034 kubelet[2676]: E1216 12:33:43.391885 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:33:43.395311 systemd[1]: Started cri-containerd-c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7.scope - libcontainer container c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7.
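The repeating driver-call.go/plugins.go triplet above is the kubelet's FlexVolume plugin probe failing: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist yet, so stdout comes back empty and unmarshalling "" fails with "unexpected end of JSON input". Per the FlexVolume contract, a driver answers init by printing a JSON status object to stdout. A minimal sketch of a conforming driver (our illustration, not Calico's actual uds binary):

```go
// flexvol-driver-sketch: answers the kubelet's FlexVolume calls.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet unmarshals after each call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A non-empty Success reply is exactly what would silence the
		// "unexpected end of JSON input" probe errors in this log.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Other verbs (mount, unmount, ...) may report Not supported.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}
```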
Dec 16 12:33:43.395430 kubelet[2676]: E1216 12:33:43.395332 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.395430 kubelet[2676]: W1216 12:33:43.395342 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.395430 kubelet[2676]: E1216 12:33:43.395373 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:33:43.396003 kubelet[2676]: I1216 12:33:43.395988 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d21198e2-9674-4db1-a87b-fd2588ce9583-kubelet-dir\") pod \"csi-node-driver-vwtlw\" (UID: \"d21198e2-9674-4db1-a87b-fd2588ce9583\") " pod="calico-system/csi-node-driver-vwtlw"
Dec 16 12:33:43.396742 kubelet[2676]: I1216 12:33:43.396703 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d21198e2-9674-4db1-a87b-fd2588ce9583-registration-dir\") pod \"csi-node-driver-vwtlw\" (UID: \"d21198e2-9674-4db1-a87b-fd2588ce9583\") " pod="calico-system/csi-node-driver-vwtlw"
Dec 16 12:33:43.445801 containerd[1534]: time="2025-12-16T12:33:43.445640885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctb9q,Uid:dea74453-ba89-414d-bd64-212d699b6654,Namespace:calico-system,Attempt:0,}"
Dec 16 12:33:43.452033 containerd[1534]: time="2025-12-16T12:33:43.451991173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb555b86b-wqt89,Uid:de086d22-95d2-4771-8002-64d22642138e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7\""
Dec 16 12:33:43.455570 containerd[1534]: time="2025-12-16T12:33:43.454984617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:33:43.466706 containerd[1534]: time="2025-12-16T12:33:43.466662273Z" level=info msg="connecting to shim 3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090" address="unix:///run/containerd/s/8de4ea0b1e10e6ce460a559c6bc253a8e2eb59d042405423ca199d05135c13a0" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:33:43.487506 systemd[1]: Started cri-containerd-3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090.scope - libcontainer container 3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090.
Dec 16 12:33:43.497972 kubelet[2676]: E1216 12:33:43.497930 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.497972 kubelet[2676]: W1216 12:33:43.497957 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.498138 kubelet[2676]: E1216 12:33:43.497989 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
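The "connecting to shim ... protocol=ttrpc version=3" entries show containerd dialing each shim's unix socket (the address= field) and speaking ttrpc, a lightweight gRPC-like protocol, over it. A hypothetical sketch of such a connection with the github.com/containerd/ttrpc package, reusing the socket path from the log (an illustration of the mechanism, not containerd's internal wiring):

```go
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// The shim socket printed in the "connecting to shim" entry above.
	conn, err := net.Dial("unix",
		"/run/containerd/s/8de4ea0b1e10e6ce460a559c6bc253a8e2eb59d042405423ca199d05135c13a0")
	if err != nil {
		log.Fatal(err)
	}
	client := ttrpc.NewClient(conn) // ttrpc multiplexes RPCs over this one connection
	defer client.Close()
	// Real callers now invoke shim task-service methods via client.Call
	// with protobuf-encoded requests; omitted here.
}
```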
Dec 16 12:33:43.498636 kubelet[2676]: I1216 12:33:43.498571 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d21198e2-9674-4db1-a87b-fd2588ce9583-socket-dir\") pod \"csi-node-driver-vwtlw\" (UID: \"d21198e2-9674-4db1-a87b-fd2588ce9583\") " pod="calico-system/csi-node-driver-vwtlw"
Dec 16 12:33:43.500408 kubelet[2676]: I1216 12:33:43.499933 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d21198e2-9674-4db1-a87b-fd2588ce9583-varrun\") pod \"csi-node-driver-vwtlw\" (UID: \"d21198e2-9674-4db1-a87b-fd2588ce9583\") " pod="calico-system/csi-node-driver-vwtlw"
Dec 16 12:33:43.500955 kubelet[2676]: I1216 12:33:43.500604 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bms\" (UniqueName: \"kubernetes.io/projected/d21198e2-9674-4db1-a87b-fd2588ce9583-kube-api-access-w7bms\") pod \"csi-node-driver-vwtlw\" (UID: \"d21198e2-9674-4db1-a87b-fd2588ce9583\") " pod="calico-system/csi-node-driver-vwtlw"
Dec 16 12:33:43.528139 containerd[1534]: time="2025-12-16T12:33:43.528091314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ctb9q,Uid:dea74453-ba89-414d-bd64-212d699b6654,Namespace:calico-system,Attempt:0,} returns sandbox id \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\""
Dec 16 12:33:43.601581 kubelet[2676]: E1216 12:33:43.601478 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:33:43.602247 kubelet[2676]: W1216 12:33:43.601893 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:33:43.602247 kubelet[2676]: E1216 12:33:43.601925 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:33:44.451311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359058404.mount: Deactivated successfully.
Dec 16 12:33:44.881140 kubelet[2676]: E1216 12:33:44.880982 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583"
Dec 16 12:33:45.157032 containerd[1534]: time="2025-12-16T12:33:45.156909312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:45.157736 containerd[1534]: time="2025-12-16T12:33:45.157691273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 16 12:33:45.158829 containerd[1534]: time="2025-12-16T12:33:45.158802234Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:45.161072 containerd[1534]: time="2025-12-16T12:33:45.161034677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:33:45.162292 containerd[1534]: time="2025-12-16T12:33:45.162117878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.707085301s"
Dec 16 12:33:45.162292 containerd[1534]: time="2025-12-16T12:33:45.162177918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
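Two pull durations appear for the typha image: containerd measures the pull itself at 1.707085301s, while the kubelet tracker entry further below brackets the whole CRI round trip from firstStartedPulling to lastFinishedPulling, so its window comes out a few milliseconds wider. A quick cross-check (our arithmetic, using the m=+ offsets):

```go
package main

import "fmt"

func main() {
	kubeletWindow := 24.399578696 - 22.688544391 // lastFinishedPulling - firstStartedPulling
	containerdPull := 1.707085301                // "Pulled image ... in 1.707085301s"
	fmt.Printf("kubelet window %.9fs, containerd pull %.9fs, overhead %.0fms\n",
		kubeletWindow, containerdPull, (kubeletWindow-containerdPull)*1000)
}
```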
12:33:45.165348 containerd[1534]: time="2025-12-16T12:33:45.165294082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:33:45.190982 containerd[1534]: time="2025-12-16T12:33:45.190853031Z" level=info msg="CreateContainer within sandbox \"c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:33:45.203985 containerd[1534]: time="2025-12-16T12:33:45.202100644Z" level=info msg="Container 590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:45.211608 containerd[1534]: time="2025-12-16T12:33:45.211561535Z" level=info msg="CreateContainer within sandbox \"c8fb92ebf5223f9c3fd4f2067c56a9ade3b44fade0d857460d3c2af8bda388c7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172\"" Dec 16 12:33:45.212349 containerd[1534]: time="2025-12-16T12:33:45.212317976Z" level=info msg="StartContainer for \"590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172\"" Dec 16 12:33:45.213603 containerd[1534]: time="2025-12-16T12:33:45.213564498Z" level=info msg="connecting to shim 590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172" address="unix:///run/containerd/s/af2eef5f9181c4dec95a159e5dce9d2746207b2d18d52f881a73fffe9911d87e" protocol=ttrpc version=3 Dec 16 12:33:45.238370 systemd[1]: Started cri-containerd-590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172.scope - libcontainer container 590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172. Dec 16 12:33:45.284849 containerd[1534]: time="2025-12-16T12:33:45.284803620Z" level=info msg="StartContainer for \"590330a39bdadefc592f35747cc8c65eb86ed04d6ac9f0529eaf259d827b5172\" returns successfully" Dec 16 12:33:45.965981 kubelet[2676]: I1216 12:33:45.965874 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fb555b86b-wqt89" podStartSLOduration=2.254819463 podStartE2EDuration="3.965853768s" podCreationTimestamp="2025-12-16 12:33:42 +0000 UTC" firstStartedPulling="2025-12-16 12:33:43.453890656 +0000 UTC m=+22.688544391" lastFinishedPulling="2025-12-16 12:33:45.164924961 +0000 UTC m=+24.399578696" observedRunningTime="2025-12-16 12:33:45.965731488 +0000 UTC m=+25.200385223" watchObservedRunningTime="2025-12-16 12:33:45.965853768 +0000 UTC m=+25.200507503" Dec 16 12:33:46.020281 kubelet[2676]: E1216 12:33:46.020248 2676 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:33:46.020281 kubelet[2676]: W1216 12:33:46.020274 2676 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:33:46.020441 kubelet[2676]: E1216 12:33:46.020297 2676 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:33:46.109470 containerd[1534]: time="2025-12-16T12:33:46.109416887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:46.110203 containerd[1534]: time="2025-12-16T12:33:46.110174728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 16 12:33:46.111148 containerd[1534]: time="2025-12-16T12:33:46.110900528Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:46.113445 containerd[1534]: time="2025-12-16T12:33:46.113402931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:46.114227 containerd[1534]: time="2025-12-16T12:33:46.114196332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 948.008009ms" Dec 16 12:33:46.114322 containerd[1534]: time="2025-12-16T12:33:46.114306612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:33:46.116872 containerd[1534]: time="2025-12-16T12:33:46.116833015Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:33:46.147495 containerd[1534]: time="2025-12-16T12:33:46.146404127Z" level=info msg="Container 73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008: 
CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:46.153774 containerd[1534]: time="2025-12-16T12:33:46.153728015Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008\"" Dec 16 12:33:46.154468 containerd[1534]: time="2025-12-16T12:33:46.154437656Z" level=info msg="StartContainer for \"73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008\"" Dec 16 12:33:46.155997 containerd[1534]: time="2025-12-16T12:33:46.155915737Z" level=info msg="connecting to shim 73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008" address="unix:///run/containerd/s/8de4ea0b1e10e6ce460a559c6bc253a8e2eb59d042405423ca199d05135c13a0" protocol=ttrpc version=3 Dec 16 12:33:46.181333 systemd[1]: Started cri-containerd-73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008.scope - libcontainer container 73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008. Dec 16 12:33:46.263735 containerd[1534]: time="2025-12-16T12:33:46.263634414Z" level=info msg="StartContainer for \"73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008\" returns successfully" Dec 16 12:33:46.277643 systemd[1]: cri-containerd-73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008.scope: Deactivated successfully. Dec 16 12:33:46.316766 containerd[1534]: time="2025-12-16T12:33:46.316622432Z" level=info msg="received container exit event container_id:\"73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008\" id:\"73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008\" pid:3373 exited_at:{seconds:1765888426 nanos:307763502}" Dec 16 12:33:46.372410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73b7311bb949c28cca04f1e9288f246aac895913e9db702c260c3428e9c1d008-rootfs.mount: Deactivated successfully. 
Dec 16 12:33:46.881408 kubelet[2676]: E1216 12:33:46.880961 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:33:46.960155 kubelet[2676]: I1216 12:33:46.960096 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:33:46.961664 containerd[1534]: time="2025-12-16T12:33:46.961619692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:33:48.881370 kubelet[2676]: E1216 12:33:48.880946 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:33:50.145844 containerd[1534]: time="2025-12-16T12:33:50.145782082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:50.146865 containerd[1534]: time="2025-12-16T12:33:50.146759962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:33:50.147794 containerd[1534]: time="2025-12-16T12:33:50.147766163Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:50.149935 containerd[1534]: time="2025-12-16T12:33:50.149905725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:50.150750 containerd[1534]: time="2025-12-16T12:33:50.150576926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.188912514s" Dec 16 12:33:50.150750 containerd[1534]: time="2025-12-16T12:33:50.150616726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:33:50.154396 containerd[1534]: time="2025-12-16T12:33:50.154358009Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:33:50.162254 containerd[1534]: time="2025-12-16T12:33:50.162212215Z" level=info msg="Container 687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:50.171705 containerd[1534]: time="2025-12-16T12:33:50.171634783Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d\"" Dec 16 12:33:50.172445 containerd[1534]: 
time="2025-12-16T12:33:50.172415304Z" level=info msg="StartContainer for \"687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d\"" Dec 16 12:33:50.175167 containerd[1534]: time="2025-12-16T12:33:50.175101066Z" level=info msg="connecting to shim 687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d" address="unix:///run/containerd/s/8de4ea0b1e10e6ce460a559c6bc253a8e2eb59d042405423ca199d05135c13a0" protocol=ttrpc version=3 Dec 16 12:33:50.196338 systemd[1]: Started cri-containerd-687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d.scope - libcontainer container 687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d. Dec 16 12:33:50.268318 containerd[1534]: time="2025-12-16T12:33:50.268270264Z" level=info msg="StartContainer for \"687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d\" returns successfully" Dec 16 12:33:50.880148 kubelet[2676]: E1216 12:33:50.880061 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:33:51.087911 systemd[1]: cri-containerd-687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d.scope: Deactivated successfully. Dec 16 12:33:51.089040 systemd[1]: cri-containerd-687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d.scope: Consumed 493ms CPU time, 174M memory peak, 2.1M read from disk, 165.9M written to disk. Dec 16 12:33:51.089915 containerd[1534]: time="2025-12-16T12:33:51.089243428Z" level=info msg="received container exit event container_id:\"687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d\" id:\"687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d\" pid:3438 exited_at:{seconds:1765888431 nanos:88784028}" Dec 16 12:33:51.109860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687171e280c7c57c1cd390e00eb0e14a90a96d3f543b0aa70fec36d231da008d-rootfs.mount: Deactivated successfully. 
Dec 16 12:33:51.163065 kubelet[2676]: I1216 12:33:51.162930 2676 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:33:51.207052 kubelet[2676]: W1216 12:33:51.207009 2676 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Dec 16 12:33:51.207201 kubelet[2676]: E1216 12:33:51.207067 2676 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Dec 16 12:33:51.211774 kubelet[2676]: W1216 12:33:51.211727 2676 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Dec 16 12:33:51.211920 kubelet[2676]: E1216 12:33:51.211780 2676 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Dec 16 12:33:51.212525 systemd[1]: Created slice kubepods-besteffort-podd618d708_5752_42c7_bde6_a99eef3e5715.slice - libcontainer container kubepods-besteffort-podd618d708_5752_42c7_bde6_a99eef3e5715.slice. Dec 16 12:33:51.220865 systemd[1]: Created slice kubepods-besteffort-poda11b445e_3d05_4dcc_ad5f_d55a1cff6339.slice - libcontainer container kubepods-besteffort-poda11b445e_3d05_4dcc_ad5f_d55a1cff6339.slice. Dec 16 12:33:51.229508 systemd[1]: Created slice kubepods-besteffort-podd783d037_6b14_4dbb_b519_84eab1f1deab.slice - libcontainer container kubepods-besteffort-podd783d037_6b14_4dbb_b519_84eab1f1deab.slice. Dec 16 12:33:51.236579 systemd[1]: Created slice kubepods-burstable-pode6af8fb3_d90c_4caf_af4e_e9051879291a.slice - libcontainer container kubepods-burstable-pode6af8fb3_d90c_4caf_af4e_e9051879291a.slice. Dec 16 12:33:51.247489 systemd[1]: Created slice kubepods-burstable-pod35872955_7dab_4f3b_a9ae_4b0261f37454.slice - libcontainer container kubepods-burstable-pod35872955_7dab_4f3b_a9ae_4b0261f37454.slice. Dec 16 12:33:51.256401 systemd[1]: Created slice kubepods-besteffort-pod6c344c36_ea3d_4229_86ca_f58b0ce00736.slice - libcontainer container kubepods-besteffort-pod6c344c36_ea3d_4229_86ca_f58b0ce00736.slice. Dec 16 12:33:51.262434 systemd[1]: Created slice kubepods-besteffort-pod147be4de_63f4_4902_ba20_f537cb8c893c.slice - libcontainer container kubepods-besteffort-pod147be4de_63f4_4902_ba20_f537cb8c893c.slice. 
Dec 16 12:33:51.267176 kubelet[2676]: I1216 12:33:51.266681 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsm9\" (UniqueName: \"kubernetes.io/projected/35872955-7dab-4f3b-a9ae-4b0261f37454-kube-api-access-tfsm9\") pod \"coredns-668d6bf9bc-sqrqd\" (UID: \"35872955-7dab-4f3b-a9ae-4b0261f37454\") " pod="kube-system/coredns-668d6bf9bc-sqrqd" Dec 16 12:33:51.267176 kubelet[2676]: I1216 12:33:51.266725 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a11b445e-3d05-4dcc-ad5f-d55a1cff6339-calico-apiserver-certs\") pod \"calico-apiserver-756cf9c5df-tjvzk\" (UID: \"a11b445e-3d05-4dcc-ad5f-d55a1cff6339\") " pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" Dec 16 12:33:51.267176 kubelet[2676]: I1216 12:33:51.266743 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d783d037-6b14-4dbb-b519-84eab1f1deab-config\") pod \"goldmane-666569f655-pmw9j\" (UID: \"d783d037-6b14-4dbb-b519-84eab1f1deab\") " pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.267176 kubelet[2676]: I1216 12:33:51.266789 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6af8fb3-d90c-4caf-af4e-e9051879291a-config-volume\") pod \"coredns-668d6bf9bc-62d7g\" (UID: \"e6af8fb3-d90c-4caf-af4e-e9051879291a\") " pod="kube-system/coredns-668d6bf9bc-62d7g" Dec 16 12:33:51.267176 kubelet[2676]: I1216 12:33:51.266806 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d618d708-5752-42c7-bde6-a99eef3e5715-tigera-ca-bundle\") pod \"calico-kube-controllers-6877579798-hmh9z\" (UID: \"d618d708-5752-42c7-bde6-a99eef3e5715\") " pod="calico-system/calico-kube-controllers-6877579798-hmh9z" Dec 16 12:33:51.267393 kubelet[2676]: I1216 12:33:51.266822 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drf8\" (UniqueName: \"kubernetes.io/projected/d618d708-5752-42c7-bde6-a99eef3e5715-kube-api-access-5drf8\") pod \"calico-kube-controllers-6877579798-hmh9z\" (UID: \"d618d708-5752-42c7-bde6-a99eef3e5715\") " pod="calico-system/calico-kube-controllers-6877579798-hmh9z" Dec 16 12:33:51.267393 kubelet[2676]: I1216 12:33:51.266843 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d783d037-6b14-4dbb-b519-84eab1f1deab-goldmane-ca-bundle\") pod \"goldmane-666569f655-pmw9j\" (UID: \"d783d037-6b14-4dbb-b519-84eab1f1deab\") " pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.267393 kubelet[2676]: I1216 12:33:51.266860 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-backend-key-pair\") pod \"whisker-5fb99cfb75-n6mtn\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " pod="calico-system/whisker-5fb99cfb75-n6mtn" Dec 16 12:33:51.267393 kubelet[2676]: I1216 12:33:51.266878 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/35872955-7dab-4f3b-a9ae-4b0261f37454-config-volume\") pod \"coredns-668d6bf9bc-sqrqd\" (UID: \"35872955-7dab-4f3b-a9ae-4b0261f37454\") " pod="kube-system/coredns-668d6bf9bc-sqrqd" Dec 16 12:33:51.267393 kubelet[2676]: I1216 12:33:51.266894 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84l6\" (UniqueName: \"kubernetes.io/projected/6c344c36-ea3d-4229-86ca-f58b0ce00736-kube-api-access-q84l6\") pod \"whisker-5fb99cfb75-n6mtn\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " pod="calico-system/whisker-5fb99cfb75-n6mtn" Dec 16 12:33:51.267526 kubelet[2676]: I1216 12:33:51.266914 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnrc\" (UniqueName: \"kubernetes.io/projected/147be4de-63f4-4902-ba20-f537cb8c893c-kube-api-access-mxnrc\") pod \"calico-apiserver-756cf9c5df-g7rtb\" (UID: \"147be4de-63f4-4902-ba20-f537cb8c893c\") " pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" Dec 16 12:33:51.267526 kubelet[2676]: I1216 12:33:51.266934 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-ca-bundle\") pod \"whisker-5fb99cfb75-n6mtn\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " pod="calico-system/whisker-5fb99cfb75-n6mtn" Dec 16 12:33:51.267526 kubelet[2676]: I1216 12:33:51.266952 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/147be4de-63f4-4902-ba20-f537cb8c893c-calico-apiserver-certs\") pod \"calico-apiserver-756cf9c5df-g7rtb\" (UID: \"147be4de-63f4-4902-ba20-f537cb8c893c\") " pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" Dec 16 12:33:51.267526 kubelet[2676]: I1216 12:33:51.266968 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl5n7\" (UniqueName: \"kubernetes.io/projected/a11b445e-3d05-4dcc-ad5f-d55a1cff6339-kube-api-access-vl5n7\") pod \"calico-apiserver-756cf9c5df-tjvzk\" (UID: \"a11b445e-3d05-4dcc-ad5f-d55a1cff6339\") " pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" Dec 16 12:33:51.267526 kubelet[2676]: I1216 12:33:51.266985 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d783d037-6b14-4dbb-b519-84eab1f1deab-goldmane-key-pair\") pod \"goldmane-666569f655-pmw9j\" (UID: \"d783d037-6b14-4dbb-b519-84eab1f1deab\") " pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.267632 kubelet[2676]: I1216 12:33:51.267004 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz4s6\" (UniqueName: \"kubernetes.io/projected/e6af8fb3-d90c-4caf-af4e-e9051879291a-kube-api-access-lz4s6\") pod \"coredns-668d6bf9bc-62d7g\" (UID: \"e6af8fb3-d90c-4caf-af4e-e9051879291a\") " pod="kube-system/coredns-668d6bf9bc-62d7g" Dec 16 12:33:51.267632 kubelet[2676]: I1216 12:33:51.267019 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk9g9\" (UniqueName: \"kubernetes.io/projected/d783d037-6b14-4dbb-b519-84eab1f1deab-kube-api-access-dk9g9\") pod \"goldmane-666569f655-pmw9j\" (UID: \"d783d037-6b14-4dbb-b519-84eab1f1deab\") " 
pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.520078 containerd[1534]: time="2025-12-16T12:33:51.520019566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6877579798-hmh9z,Uid:d618d708-5752-42c7-bde6-a99eef3e5715,Namespace:calico-system,Attempt:0,}" Dec 16 12:33:51.525304 containerd[1534]: time="2025-12-16T12:33:51.525250891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-tjvzk,Uid:a11b445e-3d05-4dcc-ad5f-d55a1cff6339,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:33:51.532733 containerd[1534]: time="2025-12-16T12:33:51.532687416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmw9j,Uid:d783d037-6b14-4dbb-b519-84eab1f1deab,Namespace:calico-system,Attempt:0,}" Dec 16 12:33:51.544981 containerd[1534]: time="2025-12-16T12:33:51.544927346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-62d7g,Uid:e6af8fb3-d90c-4caf-af4e-e9051879291a,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:51.553904 containerd[1534]: time="2025-12-16T12:33:51.553849193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqrqd,Uid:35872955-7dab-4f3b-a9ae-4b0261f37454,Namespace:kube-system,Attempt:0,}" Dec 16 12:33:51.567930 containerd[1534]: time="2025-12-16T12:33:51.567876964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-g7rtb,Uid:147be4de-63f4-4902-ba20-f537cb8c893c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:33:51.645393 containerd[1534]: time="2025-12-16T12:33:51.645327025Z" level=error msg="Failed to destroy network for sandbox \"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.649411 containerd[1534]: time="2025-12-16T12:33:51.649294588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-62d7g,Uid:e6af8fb3-d90c-4caf-af4e-e9051879291a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.649691 kubelet[2676]: E1216 12:33:51.649622 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.649756 kubelet[2676]: E1216 12:33:51.649726 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-62d7g" Dec 16 12:33:51.649756 kubelet[2676]: E1216 12:33:51.649747 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-62d7g" Dec 16 12:33:51.649826 kubelet[2676]: E1216 12:33:51.649795 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-62d7g_kube-system(e6af8fb3-d90c-4caf-af4e-e9051879291a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-62d7g_kube-system(e6af8fb3-d90c-4caf-af4e-e9051879291a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8526f18a51aca00df5094300209c9b1109be588fe449925b92c04ed3f1599d76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-62d7g" podUID="e6af8fb3-d90c-4caf-af4e-e9051879291a" Dec 16 12:33:51.650069 containerd[1534]: time="2025-12-16T12:33:51.650039909Z" level=error msg="Failed to destroy network for sandbox \"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.652726 containerd[1534]: time="2025-12-16T12:33:51.651580070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6877579798-hmh9z,Uid:d618d708-5752-42c7-bde6-a99eef3e5715,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.652882 kubelet[2676]: E1216 12:33:51.651840 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.652882 kubelet[2676]: E1216 12:33:51.651889 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" Dec 16 12:33:51.652882 kubelet[2676]: E1216 12:33:51.651908 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6877579798-hmh9z" Dec 16 12:33:51.653157 kubelet[2676]: E1216 12:33:51.651945 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6877579798-hmh9z_calico-system(d618d708-5752-42c7-bde6-a99eef3e5715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6877579798-hmh9z_calico-system(d618d708-5752-42c7-bde6-a99eef3e5715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8da76ab8f5710080360d12bc5f12ad40336734d7a0fba38c5cafad024cf1ed5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:33:51.661972 containerd[1534]: time="2025-12-16T12:33:51.661905718Z" level=error msg="Failed to destroy network for sandbox \"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.663967 containerd[1534]: time="2025-12-16T12:33:51.663905680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-tjvzk,Uid:a11b445e-3d05-4dcc-ad5f-d55a1cff6339,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.664244 kubelet[2676]: E1216 12:33:51.664199 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.664342 kubelet[2676]: E1216 12:33:51.664278 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" Dec 16 12:33:51.664342 kubelet[2676]: E1216 12:33:51.664300 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" Dec 16 12:33:51.664470 kubelet[2676]: E1216 12:33:51.664347 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-756cf9c5df-tjvzk_calico-apiserver(a11b445e-3d05-4dcc-ad5f-d55a1cff6339)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756cf9c5df-tjvzk_calico-apiserver(a11b445e-3d05-4dcc-ad5f-d55a1cff6339)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3957268007b22a5b6e1831c1ecea7ee4ab31a8dcb1e8d9d342bf82cb7dae9a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:33:51.670395 containerd[1534]: time="2025-12-16T12:33:51.670335925Z" level=error msg="Failed to destroy network for sandbox \"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.674155 containerd[1534]: time="2025-12-16T12:33:51.673963407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqrqd,Uid:35872955-7dab-4f3b-a9ae-4b0261f37454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.674675 kubelet[2676]: E1216 12:33:51.674630 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.674742 kubelet[2676]: E1216 12:33:51.674705 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sqrqd" Dec 16 12:33:51.674742 kubelet[2676]: E1216 12:33:51.674726 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sqrqd" Dec 16 12:33:51.674844 kubelet[2676]: E1216 12:33:51.674806 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sqrqd_kube-system(35872955-7dab-4f3b-a9ae-4b0261f37454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sqrqd_kube-system(35872955-7dab-4f3b-a9ae-4b0261f37454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cba2103af1491a5f59722aeef33be6492e3b3ac68848bb23d8d6539ae76c91e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sqrqd" podUID="35872955-7dab-4f3b-a9ae-4b0261f37454" Dec 16 12:33:51.678540 containerd[1534]: time="2025-12-16T12:33:51.678486811Z" level=error msg="Failed to destroy network for sandbox \"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.679757 containerd[1534]: time="2025-12-16T12:33:51.679652932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmw9j,Uid:d783d037-6b14-4dbb-b519-84eab1f1deab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.680008 kubelet[2676]: E1216 12:33:51.679941 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.680063 kubelet[2676]: E1216 12:33:51.680033 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.680169 kubelet[2676]: E1216 12:33:51.680056 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pmw9j" Dec 16 12:33:51.680398 kubelet[2676]: E1216 12:33:51.680319 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pmw9j_calico-system(d783d037-6b14-4dbb-b519-84eab1f1deab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pmw9j_calico-system(d783d037-6b14-4dbb-b519-84eab1f1deab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9bd8e5c2c22f4806bbcad260f4f7b8b7ba32e1720b5a7f01239a9de4a3e27d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" 
Dec 16 12:33:51.682266 containerd[1534]: time="2025-12-16T12:33:51.682221574Z" level=error msg="Failed to destroy network for sandbox \"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.683411 containerd[1534]: time="2025-12-16T12:33:51.683374295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-g7rtb,Uid:147be4de-63f4-4902-ba20-f537cb8c893c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.683641 kubelet[2676]: E1216 12:33:51.683603 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:51.683681 kubelet[2676]: E1216 12:33:51.683664 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" Dec 16 12:33:51.683732 kubelet[2676]: E1216 12:33:51.683688 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" Dec 16 12:33:51.683779 kubelet[2676]: E1216 12:33:51.683754 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-756cf9c5df-g7rtb_calico-apiserver(147be4de-63f4-4902-ba20-f537cb8c893c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-756cf9c5df-g7rtb_calico-apiserver(147be4de-63f4-4902-ba20-f537cb8c893c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d54e00f65f0e749667e63be8df1dfb54e830c3f7c42660b64d1cd78caeaf2ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:33:51.977418 containerd[1534]: time="2025-12-16T12:33:51.977339686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:33:52.461179 containerd[1534]: time="2025-12-16T12:33:52.461101363Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5fb99cfb75-n6mtn,Uid:6c344c36-ea3d-4229-86ca-f58b0ce00736,Namespace:calico-system,Attempt:0,}" Dec 16 12:33:52.506331 containerd[1534]: time="2025-12-16T12:33:52.506263917Z" level=error msg="Failed to destroy network for sandbox \"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.508382 systemd[1]: run-netns-cni\x2d9753a064\x2dc97c\x2dd6fd\x2df6a2\x2da6bcae7e26c2.mount: Deactivated successfully. Dec 16 12:33:52.509587 containerd[1534]: time="2025-12-16T12:33:52.509530199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fb99cfb75-n6mtn,Uid:6c344c36-ea3d-4229-86ca-f58b0ce00736,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.509913 kubelet[2676]: E1216 12:33:52.509873 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.510436 kubelet[2676]: E1216 12:33:52.509948 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb99cfb75-n6mtn" Dec 16 12:33:52.510436 kubelet[2676]: E1216 12:33:52.509977 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fb99cfb75-n6mtn" Dec 16 12:33:52.510436 kubelet[2676]: E1216 12:33:52.510025 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fb99cfb75-n6mtn_calico-system(6c344c36-ea3d-4229-86ca-f58b0ce00736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fb99cfb75-n6mtn_calico-system(6c344c36-ea3d-4229-86ca-f58b0ce00736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b770cf52f382272319eb393670aac0d171e5f6e5d293b72840f0229aee86c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fb99cfb75-n6mtn" podUID="6c344c36-ea3d-4229-86ca-f58b0ce00736" Dec 16 12:33:52.886780 systemd[1]: Created slice 
kubepods-besteffort-podd21198e2_9674_4db1_a87b_fd2588ce9583.slice - libcontainer container kubepods-besteffort-podd21198e2_9674_4db1_a87b_fd2588ce9583.slice. Dec 16 12:33:52.889777 containerd[1534]: time="2025-12-16T12:33:52.889634119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vwtlw,Uid:d21198e2-9674-4db1-a87b-fd2588ce9583,Namespace:calico-system,Attempt:0,}" Dec 16 12:33:52.953267 containerd[1534]: time="2025-12-16T12:33:52.953085206Z" level=error msg="Failed to destroy network for sandbox \"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.955394 systemd[1]: run-netns-cni\x2d044c623b\x2dd24c\x2d3cb6\x2dae6b\x2d7f4acdbccc18.mount: Deactivated successfully. Dec 16 12:33:52.961244 containerd[1534]: time="2025-12-16T12:33:52.960894892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vwtlw,Uid:d21198e2-9674-4db1-a87b-fd2588ce9583,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.961597 kubelet[2676]: E1216 12:33:52.961527 2676 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:33:52.961657 kubelet[2676]: E1216 12:33:52.961596 2676 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vwtlw" Dec 16 12:33:52.961657 kubelet[2676]: E1216 12:33:52.961630 2676 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vwtlw" Dec 16 12:33:52.961779 kubelet[2676]: E1216 12:33:52.961668 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40454437d1b37eae298cc17407081e23e9efef6ebc60c13b6aaa59d5aa32ccd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:33:55.743211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762747487.mount: Deactivated successfully. Dec 16 12:33:56.040815 containerd[1534]: time="2025-12-16T12:33:56.040345529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:56.044944 containerd[1534]: time="2025-12-16T12:33:56.044901572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:33:56.049139 containerd[1534]: time="2025-12-16T12:33:56.049071134Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:56.052592 containerd[1534]: time="2025-12-16T12:33:56.052551656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:33:56.053308 containerd[1534]: time="2025-12-16T12:33:56.053275657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.075857731s" Dec 16 12:33:56.053350 containerd[1534]: time="2025-12-16T12:33:56.053314937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:33:56.063930 containerd[1534]: time="2025-12-16T12:33:56.062173982Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:33:56.087146 containerd[1534]: time="2025-12-16T12:33:56.086101195Z" level=info msg="Container 6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:33:56.099121 containerd[1534]: time="2025-12-16T12:33:56.099069243Z" level=info msg="CreateContainer within sandbox \"3782561e2d4a4eb97482dbd7e35598ddf74d3e35b51327855bc18c1775613090\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80\"" Dec 16 12:33:56.099823 containerd[1534]: time="2025-12-16T12:33:56.099797243Z" level=info msg="StartContainer for \"6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80\"" Dec 16 12:33:56.101650 containerd[1534]: time="2025-12-16T12:33:56.101614564Z" level=info msg="connecting to shim 6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80" address="unix:///run/containerd/s/8de4ea0b1e10e6ce460a559c6bc253a8e2eb59d042405423ca199d05135c13a0" protocol=ttrpc version=3 Dec 16 12:33:56.132382 systemd[1]: Started cri-containerd-6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80.scope - libcontainer container 6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80. 
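Every RunPodSandbox failure above bottoms out in the same stat: the Calico CNI binary refuses to do any ADD or DEL until /var/lib/calico/nodename exists, and that file is written by the calico/node container, which has only just been started from the freshly pulled node:v3.30.4 image. A minimal sketch of that readiness gate, inferred from the error text itself (the path and hint are taken verbatim from the log; the function is illustrative, not Calico's actual source):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // The file calico/node writes at startup; path taken from the
    // errors above.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename fails with the same hint the plugin logs, so the
    // kubelet keeps retrying the sandbox until calico/node is up.
    func readNodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := readNodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("nodename:", name)
    }

Once calico-node is running and the file appears, the retried sandboxes below start to succeed.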
Dec 16 12:33:56.210099 containerd[1534]: time="2025-12-16T12:33:56.209919266Z" level=info msg="StartContainer for \"6016f3f19c54154f4a2bd6b6ebcff6d9c2f440105b9d7563f6f8b38692ef3b80\" returns successfully" Dec 16 12:33:56.339931 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:33:56.340042 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 12:33:56.504704 kubelet[2676]: I1216 12:33:56.504397 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-backend-key-pair\") pod \"6c344c36-ea3d-4229-86ca-f58b0ce00736\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " Dec 16 12:33:56.504704 kubelet[2676]: I1216 12:33:56.504444 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-ca-bundle\") pod \"6c344c36-ea3d-4229-86ca-f58b0ce00736\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " Dec 16 12:33:56.504704 kubelet[2676]: I1216 12:33:56.504472 2676 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q84l6\" (UniqueName: \"kubernetes.io/projected/6c344c36-ea3d-4229-86ca-f58b0ce00736-kube-api-access-q84l6\") pod \"6c344c36-ea3d-4229-86ca-f58b0ce00736\" (UID: \"6c344c36-ea3d-4229-86ca-f58b0ce00736\") " Dec 16 12:33:56.509015 kubelet[2676]: I1216 12:33:56.508976 2676 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c344c36-ea3d-4229-86ca-f58b0ce00736" (UID: "6c344c36-ea3d-4229-86ca-f58b0ce00736"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:33:56.520395 kubelet[2676]: I1216 12:33:56.520334 2676 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c344c36-ea3d-4229-86ca-f58b0ce00736" (UID: "6c344c36-ea3d-4229-86ca-f58b0ce00736"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:33:56.520518 kubelet[2676]: I1216 12:33:56.520454 2676 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c344c36-ea3d-4229-86ca-f58b0ce00736-kube-api-access-q84l6" (OuterVolumeSpecName: "kube-api-access-q84l6") pod "6c344c36-ea3d-4229-86ca-f58b0ce00736" (UID: "6c344c36-ea3d-4229-86ca-f58b0ce00736"). InnerVolumeSpecName "kube-api-access-q84l6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:33:56.604824 kubelet[2676]: I1216 12:33:56.604765 2676 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:33:56.604824 kubelet[2676]: I1216 12:33:56.604806 2676 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c344c36-ea3d-4229-86ca-f58b0ce00736-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:33:56.604824 kubelet[2676]: I1216 12:33:56.604815 2676 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q84l6\" (UniqueName: \"kubernetes.io/projected/6c344c36-ea3d-4229-86ca-f58b0ce00736-kube-api-access-q84l6\") on node \"localhost\" DevicePath \"\"" Dec 16 12:33:56.744060 systemd[1]: var-lib-kubelet-pods-6c344c36\x2dea3d\x2d4229\x2d86ca\x2df58b0ce00736-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:33:56.744183 systemd[1]: var-lib-kubelet-pods-6c344c36\x2dea3d\x2d4229\x2d86ca\x2df58b0ce00736-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq84l6.mount: Deactivated successfully. Dec 16 12:33:56.905479 systemd[1]: Removed slice kubepods-besteffort-pod6c344c36_ea3d_4229_86ca_f58b0ce00736.slice - libcontainer container kubepods-besteffort-pod6c344c36_ea3d_4229_86ca_f58b0ce00736.slice. Dec 16 12:33:57.031705 kubelet[2676]: I1216 12:33:57.031621 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ctb9q" podStartSLOduration=1.5052167889999999 podStartE2EDuration="14.031601132s" podCreationTimestamp="2025-12-16 12:33:43 +0000 UTC" firstStartedPulling="2025-12-16 12:33:43.529217435 +0000 UTC m=+22.763871170" lastFinishedPulling="2025-12-16 12:33:56.055601778 +0000 UTC m=+35.290255513" observedRunningTime="2025-12-16 12:33:57.030549892 +0000 UTC m=+36.265203627" watchObservedRunningTime="2025-12-16 12:33:57.031601132 +0000 UTC m=+36.266254867" Dec 16 12:33:57.145421 systemd[1]: Created slice kubepods-besteffort-pod3fab4643_a193_458b_8824_ad84f0104f4a.slice - libcontainer container kubepods-besteffort-pod3fab4643_a193_458b_8824_ad84f0104f4a.slice. 
Dec 16 12:33:57.209249 kubelet[2676]: I1216 12:33:57.209185 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3fab4643-a193-458b-8824-ad84f0104f4a-whisker-backend-key-pair\") pod \"whisker-545c7bf659-q24lc\" (UID: \"3fab4643-a193-458b-8824-ad84f0104f4a\") " pod="calico-system/whisker-545c7bf659-q24lc" Dec 16 12:33:57.209363 kubelet[2676]: I1216 12:33:57.209262 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvb4\" (UniqueName: \"kubernetes.io/projected/3fab4643-a193-458b-8824-ad84f0104f4a-kube-api-access-jtvb4\") pod \"whisker-545c7bf659-q24lc\" (UID: \"3fab4643-a193-458b-8824-ad84f0104f4a\") " pod="calico-system/whisker-545c7bf659-q24lc" Dec 16 12:33:57.209363 kubelet[2676]: I1216 12:33:57.209291 2676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fab4643-a193-458b-8824-ad84f0104f4a-whisker-ca-bundle\") pod \"whisker-545c7bf659-q24lc\" (UID: \"3fab4643-a193-458b-8824-ad84f0104f4a\") " pod="calico-system/whisker-545c7bf659-q24lc" Dec 16 12:33:57.451154 containerd[1534]: time="2025-12-16T12:33:57.451068596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c7bf659-q24lc,Uid:3fab4643-a193-458b-8824-ad84f0104f4a,Namespace:calico-system,Attempt:0,}" Dec 16 12:33:57.669331 systemd-networkd[1440]: cali2ce656647b1: Link UP Dec 16 12:33:57.670965 systemd-networkd[1440]: cali2ce656647b1: Gained carrier Dec 16 12:33:57.698344 containerd[1534]: 2025-12-16 12:33:57.475 [INFO][3816] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:33:57.698344 containerd[1534]: 2025-12-16 12:33:57.514 [INFO][3816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--545c7bf659--q24lc-eth0 whisker-545c7bf659- calico-system 3fab4643-a193-458b-8824-ad84f0104f4a 854 0 2025-12-16 12:33:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:545c7bf659 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-545c7bf659-q24lc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2ce656647b1 [] [] }} ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-" Dec 16 12:33:57.698344 containerd[1534]: 2025-12-16 12:33:57.515 [INFO][3816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.698344 containerd[1534]: 2025-12-16 12:33:57.602 [INFO][3831] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" HandleID="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Workload="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.603 [INFO][3831] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" HandleID="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Workload="localhost-k8s-whisker--545c7bf659--q24lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000426c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-545c7bf659-q24lc", "timestamp":"2025-12-16 12:33:57.602865277 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.603 [INFO][3831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.603 [INFO][3831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.603 [INFO][3831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.614 [INFO][3831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" host="localhost" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.623 [INFO][3831] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.629 [INFO][3831] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.631 [INFO][3831] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.635 [INFO][3831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:33:57.698664 containerd[1534]: 2025-12-16 12:33:57.635 [INFO][3831] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" host="localhost" Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.637 [INFO][3831] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7 Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.643 [INFO][3831] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" host="localhost" Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.652 [INFO][3831] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" host="localhost" Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.652 [INFO][3831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" host="localhost" Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.652 [INFO][3831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:33:57.698957 containerd[1534]: 2025-12-16 12:33:57.652 [INFO][3831] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" HandleID="k8s-pod-network.bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Workload="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.699423 containerd[1534]: 2025-12-16 12:33:57.658 [INFO][3816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--545c7bf659--q24lc-eth0", GenerateName:"whisker-545c7bf659-", Namespace:"calico-system", SelfLink:"", UID:"3fab4643-a193-458b-8824-ad84f0104f4a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"545c7bf659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-545c7bf659-q24lc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ce656647b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:33:57.699423 containerd[1534]: 2025-12-16 12:33:57.658 [INFO][3816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.700245 containerd[1534]: 2025-12-16 12:33:57.658 [INFO][3816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ce656647b1 ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.700245 containerd[1534]: 2025-12-16 12:33:57.673 [INFO][3816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.700300 containerd[1534]: 2025-12-16 12:33:57.673 [INFO][3816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--545c7bf659--q24lc-eth0", GenerateName:"whisker-545c7bf659-", Namespace:"calico-system", SelfLink:"", UID:"3fab4643-a193-458b-8824-ad84f0104f4a", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"545c7bf659", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7", Pod:"whisker-545c7bf659-q24lc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ce656647b1", MAC:"a6:3f:09:f2:15:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:33:57.700367 containerd[1534]: 2025-12-16 12:33:57.690 [INFO][3816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" Namespace="calico-system" Pod="whisker-545c7bf659-q24lc" WorkloadEndpoint="localhost-k8s-whisker--545c7bf659--q24lc-eth0" Dec 16 12:33:57.903696 containerd[1534]: time="2025-12-16T12:33:57.903638998Z" level=info msg="connecting to shim bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7" address="unix:///run/containerd/s/37dbd1526c718ff13a11558ef6f2972649a0cfa166e574cecb0b16cab7e263d7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:33:57.931374 systemd[1]: Started cri-containerd-bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7.scope - libcontainer container bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7. 
Dec 16 12:33:57.943440 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:33:57.972918 containerd[1534]: time="2025-12-16T12:33:57.972879395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545c7bf659-q24lc,Uid:3fab4643-a193-458b-8824-ad84f0104f4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcf758aea66462dad3ef8dea4f056c5af13df1471d966d53d3e754aa3993d3d7\"" Dec 16 12:33:57.976908 containerd[1534]: time="2025-12-16T12:33:57.976877317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:33:57.993891 kubelet[2676]: I1216 12:33:57.993847 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:33:58.242550 containerd[1534]: time="2025-12-16T12:33:58.242492890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:33:58.243539 containerd[1534]: time="2025-12-16T12:33:58.243494211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:33:58.243610 containerd[1534]: time="2025-12-16T12:33:58.243534171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:33:58.245318 kubelet[2676]: E1216 12:33:58.245264 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:33:58.248050 kubelet[2676]: E1216 12:33:58.247931 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:33:58.253993 kubelet[2676]: E1216 12:33:58.253924 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9d83fe66296a4a18b50d1b1d545f4ddb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:33:58.257792 containerd[1534]: time="2025-12-16T12:33:58.257290258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:33:58.496470 containerd[1534]: time="2025-12-16T12:33:58.496312457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:33:58.503647 containerd[1534]: time="2025-12-16T12:33:58.503581181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:33:58.503747 containerd[1534]: time="2025-12-16T12:33:58.503664461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:33:58.503904 kubelet[2676]: E1216 12:33:58.503840 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:33:58.503904 kubelet[2676]: E1216 12:33:58.503897 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:33:58.504083 kubelet[2676]: E1216 12:33:58.504031 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:33:58.505316 kubelet[2676]: E1216 12:33:58.505243 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a" Dec 16 12:33:58.883819 kubelet[2676]: I1216 12:33:58.883453 2676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c344c36-ea3d-4229-86ca-f58b0ce00736" path="/var/lib/kubelet/pods/6c344c36-ea3d-4229-86ca-f58b0ce00736/volumes" Dec 16 12:33:58.995428 kubelet[2676]: E1216 12:33:58.995373 
2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a" Dec 16 12:33:59.330459 systemd-networkd[1440]: cali2ce656647b1: Gained IPv6LL Dec 16 12:34:02.402290 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:34454.service - OpenSSH per-connection server daemon (10.0.0.1:34454). Dec 16 12:34:02.489519 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 34454 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:02.491042 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:02.495198 systemd-logind[1507]: New session 8 of user core. Dec 16 12:34:02.510352 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:34:02.646640 sshd[4097]: Connection closed by 10.0.0.1 port 34454 Dec 16 12:34:02.646954 sshd-session[4094]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:02.650764 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:34454.service: Deactivated successfully. Dec 16 12:34:02.652781 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:34:02.653821 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:34:02.655090 systemd-logind[1507]: Removed session 8. 
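The whisker pod's two pull failures escalate from ErrImagePull to ImagePullBackOff in the records above: after a failed pull the kubelet does not retry immediately but backs off, roughly doubling the delay between attempts up to a cap. The 10s base and 5m cap below are the commonly cited kubelet defaults, assumed here rather than read from this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffSchedule prints the delays an exponential image-pull
    // backoff would produce: each failure doubles the wait, capped.
    func backoffSchedule(base, limit time.Duration, attempts int) []time.Duration {
    	out := make([]time.Duration, 0, attempts)
    	d := base
    	for i := 0; i < attempts; i++ {
    		out = append(out, d)
    		if d *= 2; d > limit {
    			d = limit
    		}
    	}
    	return out
    }

    func main() {
    	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 6))
    	// [10s 20s 40s 1m20s 2m40s 5m0s]
    }

Since ghcr.io/flatcar/calico/whisker:v3.30.4 returns 404 rather than a transient error, no amount of backoff will succeed; the pod stays in ImagePullBackOff until the image reference is fixed.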
Dec 16 12:34:02.881360 containerd[1534]: time="2025-12-16T12:34:02.881302771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqrqd,Uid:35872955-7dab-4f3b-a9ae-4b0261f37454,Namespace:kube-system,Attempt:0,}" Dec 16 12:34:02.998822 systemd-networkd[1440]: calic28cd1d4503: Link UP Dec 16 12:34:02.999015 systemd-networkd[1440]: calic28cd1d4503: Gained carrier Dec 16 12:34:03.014169 containerd[1534]: 2025-12-16 12:34:02.908 [INFO][4132] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:03.014169 containerd[1534]: 2025-12-16 12:34:02.923 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0 coredns-668d6bf9bc- kube-system 35872955-7dab-4f3b-a9ae-4b0261f37454 791 0 2025-12-16 12:33:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-sqrqd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic28cd1d4503 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-" Dec 16 12:34:03.014169 containerd[1534]: 2025-12-16 12:34:02.923 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014169 containerd[1534]: 2025-12-16 12:34:02.950 [INFO][4147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" HandleID="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Workload="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.950 [INFO][4147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" HandleID="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Workload="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3040), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-sqrqd", "timestamp":"2025-12-16 12:34:02.950318638 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.950 [INFO][4147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.950 [INFO][4147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.950 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.965 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" host="localhost" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.970 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.975 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.977 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.979 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:03.014415 containerd[1534]: 2025-12-16 12:34:02.979 [INFO][4147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" host="localhost" Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.981 [INFO][4147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3 Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.988 [INFO][4147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" host="localhost" Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.994 [INFO][4147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" host="localhost" Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.994 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" host="localhost" Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.994 [INFO][4147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:34:03.014606 containerd[1534]: 2025-12-16 12:34:02.994 [INFO][4147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" HandleID="k8s-pod-network.10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Workload="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014717 containerd[1534]: 2025-12-16 12:34:02.996 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35872955-7dab-4f3b-a9ae-4b0261f37454", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-sqrqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28cd1d4503", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:03.014771 containerd[1534]: 2025-12-16 12:34:02.996 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014771 containerd[1534]: 2025-12-16 12:34:02.996 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic28cd1d4503 ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014771 containerd[1534]: 2025-12-16 12:34:02.999 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.014829 
containerd[1534]: 2025-12-16 12:34:03.001 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35872955-7dab-4f3b-a9ae-4b0261f37454", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3", Pod:"coredns-668d6bf9bc-sqrqd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28cd1d4503", MAC:"d2:69:5f:ac:5f:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:03.014829 containerd[1534]: 2025-12-16 12:34:03.012 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-sqrqd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--sqrqd-eth0" Dec 16 12:34:03.073823 containerd[1534]: time="2025-12-16T12:34:03.073686004Z" level=info msg="connecting to shim 10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3" address="unix:///run/containerd/s/16d175f086c50beb2977935dab2ca9da1905e1a744d92db7821ae37eec1c3b01" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:03.107352 systemd[1]: Started cri-containerd-10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3.scope - libcontainer container 10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3. 
Dec 16 12:34:03.118271 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:03.138517 containerd[1534]: time="2025-12-16T12:34:03.138477187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sqrqd,Uid:35872955-7dab-4f3b-a9ae-4b0261f37454,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3\"" Dec 16 12:34:03.141590 containerd[1534]: time="2025-12-16T12:34:03.141554708Z" level=info msg="CreateContainer within sandbox \"10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:34:03.159253 containerd[1534]: time="2025-12-16T12:34:03.159206955Z" level=info msg="Container c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:34:03.160321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount155424296.mount: Deactivated successfully. Dec 16 12:34:03.180777 containerd[1534]: time="2025-12-16T12:34:03.180704602Z" level=info msg="CreateContainer within sandbox \"10fd890b5b9d59a0612a7b2ab180c00f2e5807a0e89875e0b7457ea618b262d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521\"" Dec 16 12:34:03.182555 containerd[1534]: time="2025-12-16T12:34:03.182519363Z" level=info msg="StartContainer for \"c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521\"" Dec 16 12:34:03.184906 containerd[1534]: time="2025-12-16T12:34:03.183614683Z" level=info msg="connecting to shim c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521" address="unix:///run/containerd/s/16d175f086c50beb2977935dab2ca9da1905e1a744d92db7821ae37eec1c3b01" protocol=ttrpc version=3 Dec 16 12:34:03.209745 systemd[1]: Started cri-containerd-c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521.scope - libcontainer container c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521. Dec 16 12:34:03.245996 containerd[1534]: time="2025-12-16T12:34:03.245407386Z" level=info msg="StartContainer for \"c13ee6ec149b684ffa08954c63a977c4a7fee395883f13081467a1c47c33e521\" returns successfully" Dec 16 12:34:03.881384 containerd[1534]: time="2025-12-16T12:34:03.881330656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-tjvzk,Uid:a11b445e-3d05-4dcc-ad5f-d55a1cff6339,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:34:03.881904 containerd[1534]: time="2025-12-16T12:34:03.881856896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6877579798-hmh9z,Uid:d618d708-5752-42c7-bde6-a99eef3e5715,Namespace:calico-system,Attempt:0,}" Dec 16 12:34:03.882044 containerd[1534]: time="2025-12-16T12:34:03.881883056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vwtlw,Uid:d21198e2-9674-4db1-a87b-fd2588ce9583,Namespace:calico-system,Attempt:0,}" Dec 16 12:34:03.888863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639154148.mount: Deactivated successfully. 
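The pod_startup_latency_tracker records in this log (calico-node earlier, coredns just below) report two durations: podStartE2EDuration, observedRunningTime minus podCreationTimestamp, and podStartSLOduration, the same span minus the time spent pulling images. The calico-node record's own timestamps confirm the arithmetic:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		// Layout matching the timestamps printed by the tracker.
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-12-16 12:33:43 +0000 UTC")
    	firstPull := parse("2025-12-16 12:33:43.529217435 +0000 UTC")
    	lastPull := parse("2025-12-16 12:33:56.055601778 +0000 UTC")
    	running := parse("2025-12-16 12:33:57.031601132 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println("E2E:", e2e) // 14.031601132s, as logged
    	fmt.Println("SLO:", slo) // 1.505216789s, matching podStartSLOduration
    }

For coredns the pull timestamps are the zero time (the image was already present), so its SLO and E2E durations below are both 37.04s.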
Dec 16 12:34:04.043031 kubelet[2676]: I1216 12:34:04.042850 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sqrqd" podStartSLOduration=37.042829834 podStartE2EDuration="37.042829834s" podCreationTimestamp="2025-12-16 12:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:34:04.042530434 +0000 UTC m=+43.277184169" watchObservedRunningTime="2025-12-16 12:34:04.042829834 +0000 UTC m=+43.277483569" Dec 16 12:34:04.072555 systemd-networkd[1440]: cali9e61a282f02: Link UP Dec 16 12:34:04.073360 systemd-networkd[1440]: cali9e61a282f02: Gained carrier Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.922 [INFO][4269] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.944 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0 calico-apiserver-756cf9c5df- calico-apiserver a11b445e-3d05-4dcc-ad5f-d55a1cff6339 794 0 2025-12-16 12:33:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756cf9c5df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-756cf9c5df-tjvzk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e61a282f02 [] [] }} ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.944 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.991 [INFO][4315] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" HandleID="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Workload="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.991 [INFO][4315] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" HandleID="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Workload="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-756cf9c5df-tjvzk", "timestamp":"2025-12-16 12:34:03.991226856 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.991 [INFO][4315] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.991 [INFO][4315] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:03.991 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.003 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.011 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.020 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.027 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.032 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.032 [INFO][4315] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.037 [INFO][4315] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9 Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.042 [INFO][4315] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.054 [INFO][4315] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.054 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" host="localhost" Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.055 [INFO][4315] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:34:04.087261 containerd[1534]: 2025-12-16 12:34:04.055 [INFO][4315] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" HandleID="k8s-pod-network.d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Workload="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.066 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0", GenerateName:"calico-apiserver-756cf9c5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"a11b445e-3d05-4dcc-ad5f-d55a1cff6339", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756cf9c5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-756cf9c5df-tjvzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e61a282f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.067 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.067 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e61a282f02 ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.073 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.074 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0", GenerateName:"calico-apiserver-756cf9c5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"a11b445e-3d05-4dcc-ad5f-d55a1cff6339", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756cf9c5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9", Pod:"calico-apiserver-756cf9c5df-tjvzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e61a282f02", MAC:"26:7c:58:59:f5:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.087898 containerd[1534]: 2025-12-16 12:34:04.085 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-tjvzk" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--tjvzk-eth0" Dec 16 12:34:04.110318 containerd[1534]: time="2025-12-16T12:34:04.110269497Z" level=info msg="connecting to shim d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9" address="unix:///run/containerd/s/e19674053784c059d272812602e883e729babc779c1322baa766da7ead8d14fe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:04.144360 systemd[1]: Started cri-containerd-d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9.scope - libcontainer container d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9. 
Dec 16 12:34:04.152032 systemd-networkd[1440]: calie733c439325: Link UP Dec 16 12:34:04.152568 systemd-networkd[1440]: calie733c439325: Gained carrier Dec 16 12:34:04.169669 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:03.941 [INFO][4281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:03.965 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0 calico-kube-controllers-6877579798- calico-system d618d708-5752-42c7-bde6-a99eef3e5715 785 0 2025-12-16 12:33:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6877579798 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6877579798-hmh9z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie733c439325 [] [] }} ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:03.965 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:03.999 [INFO][4322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" HandleID="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Workload="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.000 [INFO][4322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" HandleID="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Workload="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000134a40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6877579798-hmh9z", "timestamp":"2025-12-16 12:34:03.999952459 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.000 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.055 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.055 [INFO][4322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.105 [INFO][4322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.110 [INFO][4322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.120 [INFO][4322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.124 [INFO][4322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.128 [INFO][4322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.128 [INFO][4322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.130 [INFO][4322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.134 [INFO][4322] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.143 [INFO][4322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.143 [INFO][4322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" host="localhost" Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.143 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
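Each of these IPAM traces has the same shape: take the host-wide lock, look up the block affine to this node (192.168.88.128/26 throughout), pick the next free address in it, write the block back to claim the IP, and release the lock. The Go sketch below mirrors that logic with a simplified in-memory block; the types and helper names are invented for illustration and are not Calico's libcalico-go API.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models an affine IPAM block: a CIDR plus the addresses already claimed.
type block struct {
	cidr netip.Prefix          // e.g. 192.168.88.128/26, affine to this host
	used map[netip.Addr]string // address -> handle that claimed it
}

var hostLock sync.Mutex // stands in for the "host-wide IPAM lock" in the log

// assign claims the next free address in the block for the given handle.
func assign(b *block, handle string) (netip.Addr, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; taken {
			continue
		}
		b.used[a] = handle // "Writing block in order to claim IPs"
		return a, nil
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]string{}}
	for _, s := range []string{"192.168.88.128", "192.168.88.129", "192.168.88.130"} {
		b.used[netip.MustParseAddr(s)] = "claimed-earlier-in-this-boot"
	}
	// Walking the block in order now yields .131, then .132 and .133 — the
	// addresses handed to the apiserver pod above and to the kube-controllers
	// and csi-node-driver pods in the entries that follow.
	for i := 0; i < 3; i++ {
		a, _ := assign(b, fmt.Sprintf("k8s-pod-network.sandbox-%d", i))
		fmt.Println("claimed", a)
	}
}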
Dec 16 12:34:04.172212 containerd[1534]: 2025-12-16 12:34:04.144 [INFO][4322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" HandleID="k8s-pod-network.ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Workload="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.148 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0", GenerateName:"calico-kube-controllers-6877579798-", Namespace:"calico-system", SelfLink:"", UID:"d618d708-5752-42c7-bde6-a99eef3e5715", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6877579798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6877579798-hmh9z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie733c439325", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.148 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.148 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie733c439325 ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.150 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.151 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0", GenerateName:"calico-kube-controllers-6877579798-", Namespace:"calico-system", SelfLink:"", UID:"d618d708-5752-42c7-bde6-a99eef3e5715", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6877579798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a", Pod:"calico-kube-controllers-6877579798-hmh9z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie733c439325", MAC:"c2:87:5f:64:a0:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.172769 containerd[1534]: 2025-12-16 12:34:04.166 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" Namespace="calico-system" Pod="calico-kube-controllers-6877579798-hmh9z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6877579798--hmh9z-eth0" Dec 16 12:34:04.192756 containerd[1534]: time="2025-12-16T12:34:04.192713645Z" level=info msg="connecting to shim ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a" address="unix:///run/containerd/s/c2b6727af2639deb64c93ea6ab977e0c2b6fdd62ed18627f95e5e661f87b6cbc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:04.218569 containerd[1534]: time="2025-12-16T12:34:04.218530973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-tjvzk,Uid:a11b445e-3d05-4dcc-ad5f-d55a1cff6339,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d47f2cd34aaeab3adcce79b38b4f534f0c18fcfe4df1cc6a319878e99d1f6fd9\"" Dec 16 12:34:04.222149 containerd[1534]: time="2025-12-16T12:34:04.221874534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:34:04.222326 systemd[1]: Started cri-containerd-ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a.scope - libcontainer container ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a. 
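The MAC recorded in the updated endpoint (c2:87:5f:64:a0:1d here, a different one per interface) is a randomly generated, locally administered unicast address: the low two bits of the first octet are forced to binary 10. A short sketch of that convention, assuming nothing Calico-specific:

package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalMAC returns a locally administered, unicast MAC address,
// the convention visible on every cali* endpoint in this log.
func randomLocalMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] |= 0x02  // set the locally-administered bit
	mac[0] &^= 0x01 // clear the multicast bit, keeping it unicast
	return mac, nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac) // e.g. c2:87:5f:64:a0:1d satisfies both properties
}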
Dec 16 12:34:04.240462 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:04.254583 systemd-networkd[1440]: cali6b4179bad7b: Link UP Dec 16 12:34:04.255560 systemd-networkd[1440]: cali6b4179bad7b: Gained carrier Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:03.949 [INFO][4299] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:03.968 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vwtlw-eth0 csi-node-driver- calico-system d21198e2-9674-4db1-a87b-fd2588ce9583 690 0 2025-12-16 12:33:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vwtlw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b4179bad7b [] [] }} ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:03.968 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.018 [INFO][4328] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" HandleID="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Workload="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.020 [INFO][4328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" HandleID="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Workload="localhost-k8s-csi--node--driver--vwtlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vwtlw", "timestamp":"2025-12-16 12:34:04.018034025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.022 [INFO][4328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.143 [INFO][4328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.144 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.206 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.217 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.227 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.230 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.232 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.232 [INFO][4328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.234 [INFO][4328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2 Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.240 [INFO][4328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.246 [INFO][4328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.246 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" host="localhost" Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.246 [INFO][4328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
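The timestamps make the serialization visible: this request (handler 4328) logged "About to acquire host-wide IPAM lock" at 04.022 but only acquired it around 04.143, the moment the kube-controllers request (4322) released it, which had itself waited from 04.000 until the apiserver request (4315) released at 04.055. The three concurrent CNI ADDs queue on that single lock. A toy reproduction of the pattern, with invented pod names standing in for the real handlers:

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex
	var wg sync.WaitGroup
	// Three CNI ADDs arrive almost simultaneously, as for the apiserver,
	// kube-controllers and csi-node-driver pods in this boot.
	for _, pod := range []string{"calico-apiserver", "kube-controllers", "csi-node-driver"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "about to acquire host-wide IPAM lock")
			ipamLock.Lock()
			fmt.Println(pod, "acquired lock")
			time.Sleep(50 * time.Millisecond) // stand-in for block lookup and write
			ipamLock.Unlock()
			fmt.Println(pod, "released lock")
		}(pod)
	}
	wg.Wait() // each "acquired" lines up with the previous "released", as in the log
}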
Dec 16 12:34:04.280397 containerd[1534]: 2025-12-16 12:34:04.246 [INFO][4328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" HandleID="k8s-pod-network.d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Workload="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.249 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vwtlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d21198e2-9674-4db1-a87b-fd2588ce9583", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vwtlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b4179bad7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.249 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.249 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b4179bad7b ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.255 [INFO][4299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.259 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vwtlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d21198e2-9674-4db1-a87b-fd2588ce9583", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2", Pod:"csi-node-driver-vwtlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b4179bad7b", MAC:"a6:e5:c5:59:36:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:04.281920 containerd[1534]: 2025-12-16 12:34:04.274 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" Namespace="calico-system" Pod="csi-node-driver-vwtlw" WorkloadEndpoint="localhost-k8s-csi--node--driver--vwtlw-eth0" Dec 16 12:34:04.299767 containerd[1534]: time="2025-12-16T12:34:04.299656241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6877579798-hmh9z,Uid:d618d708-5752-42c7-bde6-a99eef3e5715,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac001b4593aa005946ccc8b824cd43878dcb8a4050146b231e1a759d9c5c0a3a\"" Dec 16 12:34:04.316340 containerd[1534]: time="2025-12-16T12:34:04.315821486Z" level=info msg="connecting to shim d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2" address="unix:///run/containerd/s/7518af96a1c3e21dbe6fc0dccf97a1fd9553478a9fa6745b7d2e8e462b1b9bb7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:04.347386 systemd[1]: Started cri-containerd-d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2.scope - libcontainer container d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2. 
Dec 16 12:34:04.368714 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:04.382990 containerd[1534]: time="2025-12-16T12:34:04.382950389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vwtlw,Uid:d21198e2-9674-4db1-a87b-fd2588ce9583,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8a0d1835ea5773f73a84bd4bb8eab3a8602ab9e98bc33b7c4bd4b72aad2acf2\"" Dec 16 12:34:04.441027 containerd[1534]: time="2025-12-16T12:34:04.440878649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:04.442397 containerd[1534]: time="2025-12-16T12:34:04.442286089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:34:04.442397 containerd[1534]: time="2025-12-16T12:34:04.442379369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:04.442558 kubelet[2676]: E1216 12:34:04.442498 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:04.442607 kubelet[2676]: E1216 12:34:04.442561 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:04.442910 kubelet[2676]: E1216 12:34:04.442824 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl5n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-tjvzk_calico-apiserver(a11b445e-3d05-4dcc-ad5f-d55a1cff6339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:04.443293 containerd[1534]: time="2025-12-16T12:34:04.443257890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:34:04.444030 kubelet[2676]: E1216 12:34:04.443986 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:34:04.450414 systemd-networkd[1440]: calic28cd1d4503: Gained IPv6LL Dec 16 12:34:04.655627 containerd[1534]: time="2025-12-16T12:34:04.655313602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:04.656431 containerd[1534]: time="2025-12-16T12:34:04.656386682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:34:04.656500 containerd[1534]: time="2025-12-16T12:34:04.656463642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:34:04.656669 kubelet[2676]: E1216 12:34:04.656632 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:34:04.656724 kubelet[2676]: E1216 12:34:04.656684 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:34:04.657227 kubelet[2676]: E1216 12:34:04.656914 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5drf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6877579798-hmh9z_calico-system(d618d708-5752-42c7-bde6-a99eef3e5715): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:04.657378 containerd[1534]: time="2025-12-16T12:34:04.657350642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:34:04.658707 kubelet[2676]: E1216 12:34:04.658673 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:34:04.873288 containerd[1534]: time="2025-12-16T12:34:04.873227316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:04.875585 containerd[1534]: time="2025-12-16T12:34:04.875530077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:34:04.875659 containerd[1534]: time="2025-12-16T12:34:04.875633277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:34:04.876059 kubelet[2676]: E1216 12:34:04.875804 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:34:04.876059 kubelet[2676]: E1216 12:34:04.875861 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:34:04.876059 kubelet[2676]: E1216 12:34:04.876002 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:04.878756 containerd[1534]: time="2025-12-16T12:34:04.878715718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:34:04.882757 containerd[1534]: time="2025-12-16T12:34:04.882724679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmw9j,Uid:d783d037-6b14-4dbb-b519-84eab1f1deab,Namespace:calico-system,Attempt:0,}" Dec 16 12:34:05.006400 systemd-networkd[1440]: cali0054bfaea03: Link UP Dec 16 12:34:05.007189 systemd-networkd[1440]: cali0054bfaea03: Gained carrier Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.921 [INFO][4533] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.938 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--pmw9j-eth0 goldmane-666569f655- calico-system d783d037-6b14-4dbb-b519-84eab1f1deab 792 0 2025-12-16 12:33:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-pmw9j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0054bfaea03 [] [] }} ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" 
Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.938 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.962 [INFO][4542] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" HandleID="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Workload="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.962 [INFO][4542] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" HandleID="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Workload="localhost-k8s-goldmane--666569f655--pmw9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042e100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-pmw9j", "timestamp":"2025-12-16 12:34:04.962087466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.962 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.962 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.962 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.971 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.976 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.981 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.983 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.985 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.985 [INFO][4542] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.987 [INFO][4542] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2 Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.991 [INFO][4542] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.998 [INFO][4542] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.999 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" host="localhost" Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.999 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:34:05.019049 containerd[1534]: 2025-12-16 12:34:04.999 [INFO][4542] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" HandleID="k8s-pod-network.4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Workload="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.003 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pmw9j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d783d037-6b14-4dbb-b519-84eab1f1deab", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-pmw9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0054bfaea03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.003 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.003 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0054bfaea03 ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.006 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.006 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--pmw9j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d783d037-6b14-4dbb-b519-84eab1f1deab", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2", Pod:"goldmane-666569f655-pmw9j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0054bfaea03", MAC:"f6:e4:38:3e:a7:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:05.019580 containerd[1534]: 2025-12-16 12:34:05.016 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" Namespace="calico-system" Pod="goldmane-666569f655-pmw9j" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--pmw9j-eth0" Dec 16 12:34:05.023922 kubelet[2676]: E1216 12:34:05.023885 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:34:05.030632 kubelet[2676]: E1216 12:34:05.030538 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:34:05.050357 containerd[1534]: time="2025-12-16T12:34:05.050299775Z" level=info msg="connecting to shim 4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2" address="unix:///run/containerd/s/a2802568135b35d17b0baaf537d05a33ef8e367ecb29eaf8d1813eddecb5260f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:05.079316 systemd[1]: Started 
cri-containerd-4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2.scope - libcontainer container 4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2. Dec 16 12:34:05.093202 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:05.114486 containerd[1534]: time="2025-12-16T12:34:05.114375195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmw9j,Uid:d783d037-6b14-4dbb-b519-84eab1f1deab,Namespace:calico-system,Attempt:0,} returns sandbox id \"4452012364c02c55c1acd57ae1f2bc56b6838c61b6ec7adc128bbce9025e86f2\"" Dec 16 12:34:05.114805 containerd[1534]: time="2025-12-16T12:34:05.114780475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:05.115707 containerd[1534]: time="2025-12-16T12:34:05.115686996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:34:05.115778 containerd[1534]: time="2025-12-16T12:34:05.115741116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:34:05.116178 kubelet[2676]: E1216 12:34:05.116146 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:34:05.116432 kubelet[2676]: E1216 12:34:05.116187 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:34:05.116432 kubelet[2676]: E1216 12:34:05.116351 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:05.116909 containerd[1534]: time="2025-12-16T12:34:05.116710396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:34:05.118093 kubelet[2676]: E1216 12:34:05.118045 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:34:05.326201 containerd[1534]: time="2025-12-16T12:34:05.326159223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:05.327199 containerd[1534]: time="2025-12-16T12:34:05.327158383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:34:05.327382 containerd[1534]: time="2025-12-16T12:34:05.327232543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:05.327554 kubelet[2676]: E1216 12:34:05.327514 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:34:05.327606 kubelet[2676]: E1216 12:34:05.327563 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:34:05.327738 kubelet[2676]: E1216 12:34:05.327678 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk9g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmw9j_calico-system(d783d037-6b14-4dbb-b519-84eab1f1deab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:05.329047 kubelet[2676]: E1216 12:34:05.328986 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" Dec 16 12:34:05.602344 systemd-networkd[1440]: calie733c439325: Gained IPv6LL Dec 16 12:34:05.667754 systemd-networkd[1440]: cali9e61a282f02: Gained IPv6LL Dec 16 12:34:05.880942 containerd[1534]: time="2025-12-16T12:34:05.880836039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-g7rtb,Uid:147be4de-63f4-4902-ba20-f537cb8c893c,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:34:05.881262 containerd[1534]: time="2025-12-16T12:34:05.881225959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-62d7g,Uid:e6af8fb3-d90c-4caf-af4e-e9051879291a,Namespace:kube-system,Attempt:0,}" Dec 16 12:34:05.923338 systemd-networkd[1440]: cali6b4179bad7b: Gained IPv6LL Dec 16 12:34:06.021335 systemd-networkd[1440]: cali0151dffe637: Link UP Dec 16 12:34:06.022072 systemd-networkd[1440]: cali0151dffe637: Gained carrier Dec 16 12:34:06.033248 kubelet[2676]: E1216 12:34:06.033196 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" Dec 16 12:34:06.036190 kubelet[2676]: E1216 12:34:06.036114 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:34:06.036361 kubelet[2676]: E1216 12:34:06.036298 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:34:06.037718 kubelet[2676]: E1216 12:34:06.037668 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.925 [INFO][4634] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.945 [INFO][4634] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--62d7g-eth0 coredns-668d6bf9bc- kube-system e6af8fb3-d90c-4caf-af4e-e9051879291a 790 0 2025-12-16 12:33:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-62d7g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0151dffe637 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.946 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.973 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" HandleID="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Workload="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.973 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" HandleID="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Workload="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-62d7g", "timestamp":"2025-12-16 12:34:05.973329589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.973 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.973 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.973 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.985 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.990 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.995 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:05.997 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.000 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.000 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.001 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609 Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.010 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" host="localhost" Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:34:06.044944 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" HandleID="k8s-pod-network.34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Workload="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.019 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--62d7g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e6af8fb3-d90c-4caf-af4e-e9051879291a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-62d7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0151dffe637", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.019 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.019 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0151dffe637 ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.024 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.025 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--62d7g-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e6af8fb3-d90c-4caf-af4e-e9051879291a", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609", Pod:"coredns-668d6bf9bc-62d7g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0151dffe637", MAC:"fa:85:f3:a4:00:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:06.045839 containerd[1534]: 2025-12-16 12:34:06.040 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" Namespace="kube-system" Pod="coredns-668d6bf9bc-62d7g" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--62d7g-eth0" Dec 16 12:34:06.081251 containerd[1534]: time="2025-12-16T12:34:06.080113341Z" level=info msg="connecting to shim 34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609" address="unix:///run/containerd/s/cf4c72f3ecff23b5f1b679e49e945b4e2eeeb994b38da80b1475ad4f9a8cc437" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:06.112435 systemd[1]: Started cri-containerd-34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609.scope - libcontainer container 34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609. 
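In the coredns endpoint dump above the ports are printed in hex (Port:0x35, Port:0x23c1) and the veth MAC fa:85:f3:a4:00:af only appears once the interface exists. A throwaway sketch decoding those literal values; this is plain base conversion plus stdlib MAC parsing, nothing Calico-specific:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Port values appear hex-encoded in the struct dump above.
	for name, port := range map[string]uint16{
		"dns":     0x35,   // UDP 53
		"dns-tcp": 0x35,   // TCP 53
		"metrics": 0x23c1, // TCP 9153
	} {
		fmt.Printf("%s -> %d\n", name, port)
	}

	// The MAC recorded for cali0151dffe637 in the same dump.
	mac, err := net.ParseMAC("fa:85:f3:a4:00:af")
	if err != nil {
		panic(err)
	}
	fmt.Println("MAC:", mac)
}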
Dec 16 12:34:06.127661 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:06.146487 systemd-networkd[1440]: calib5d77c23a7b: Link UP Dec 16 12:34:06.146669 systemd-networkd[1440]: calib5d77c23a7b: Gained carrier Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.931 [INFO][4641] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.947 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0 calico-apiserver-756cf9c5df- calico-apiserver 147be4de-63f4-4902-ba20-f537cb8c893c 795 0 2025-12-16 12:33:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:756cf9c5df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-756cf9c5df-g7rtb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5d77c23a7b [] [] }} ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.947 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.977 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" HandleID="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Workload="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.977 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" HandleID="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Workload="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-756cf9c5df-g7rtb", "timestamp":"2025-12-16 12:34:05.97736175 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:05.977 [INFO][4668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.017 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.086 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.104 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.114 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.118 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.120 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.121 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.123 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.131 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.140 [INFO][4668] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.140 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" host="localhost" Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.140 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
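Taken together with the earlier traces, the allocations come out strictly sequentially: goldmane got 192.168.88.134, coredns-668d6bf9bc-62d7g .135, and calico-apiserver-756cf9c5df-g7rtb .136, each assigned under the same host-wide lock. The sketch below mimics that lock-scan-claim shape with a toy allocator; it is explicitly hypothetical and not Calico's actual ipam.go logic:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// toyIPAM is a deliberately simplified stand-in for the pattern the trace
// shows (acquire host-wide lock, load block, hand out the next free
// address, release). It is NOT Calico's implementation.
type toyIPAM struct {
	mu    sync.Mutex
	block netip.Prefix
	used  map[netip.Addr]bool
}

func (t *toyIPAM) assign() (netip.Addr, bool) {
	t.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer t.mu.Unlock() // "Released host-wide IPAM lock."
	for a := t.block.Addr(); t.block.Contains(a); a = a.Next() {
		if !t.used[a] {
			t.used[a] = true // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	ipam := &toyIPAM{
		block: netip.MustParsePrefix("192.168.88.128/26"),
		used:  map[netip.Addr]bool{},
	}
	// Pretend .128-.133 were claimed earlier in the log.
	stop := netip.MustParseAddr("192.168.88.134")
	for a := netip.MustParseAddr("192.168.88.128"); a.Less(stop); a = a.Next() {
		ipam.used[a] = true
	}
	for i := 0; i < 3; i++ { // goldmane, coredns, calico-apiserver
		a, _ := ipam.assign()
		fmt.Println("assigned", a) // .134, .135, .136 in order
	}
}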
Dec 16 12:34:06.165709 containerd[1534]: 2025-12-16 12:34:06.140 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" HandleID="k8s-pod-network.619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Workload="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.144 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0", GenerateName:"calico-apiserver-756cf9c5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"147be4de-63f4-4902-ba20-f537cb8c893c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756cf9c5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-756cf9c5df-g7rtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5d77c23a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.144 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.144 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5d77c23a7b ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.146 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.147 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0", GenerateName:"calico-apiserver-756cf9c5df-", Namespace:"calico-apiserver", SelfLink:"", UID:"147be4de-63f4-4902-ba20-f537cb8c893c", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 33, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"756cf9c5df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b", Pod:"calico-apiserver-756cf9c5df-g7rtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5d77c23a7b", MAC:"7a:02:28:b2:45:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:34:06.166555 containerd[1534]: 2025-12-16 12:34:06.162 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" Namespace="calico-apiserver" Pod="calico-apiserver-756cf9c5df-g7rtb" WorkloadEndpoint="localhost-k8s-calico--apiserver--756cf9c5df--g7rtb-eth0" Dec 16 12:34:06.169062 containerd[1534]: time="2025-12-16T12:34:06.168934288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-62d7g,Uid:e6af8fb3-d90c-4caf-af4e-e9051879291a,Namespace:kube-system,Attempt:0,} returns sandbox id \"34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609\"" Dec 16 12:34:06.172188 containerd[1534]: time="2025-12-16T12:34:06.172095009Z" level=info msg="CreateContainer within sandbox \"34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:34:06.191203 containerd[1534]: time="2025-12-16T12:34:06.191161334Z" level=info msg="Container fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:34:06.193735 containerd[1534]: time="2025-12-16T12:34:06.193678175Z" level=info msg="connecting to shim 619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b" address="unix:///run/containerd/s/87dfbd8b4811f8bcd5bc9c4b04a1e04534520a3203fd3703a4402f2d6b385ff6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:34:06.208969 containerd[1534]: time="2025-12-16T12:34:06.208885860Z" level=info msg="CreateContainer within sandbox \"34887df821418b22d5642cecd7fa805093783469630a07ee9ca1d4c4380a6609\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9\"" Dec 16 12:34:06.210252 containerd[1534]: time="2025-12-16T12:34:06.209717020Z" level=info msg="StartContainer for \"fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9\"" Dec 16 12:34:06.212745 containerd[1534]: time="2025-12-16T12:34:06.212243941Z" level=info msg="connecting to shim fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9" address="unix:///run/containerd/s/cf4c72f3ecff23b5f1b679e49e945b4e2eeeb994b38da80b1475ad4f9a8cc437" protocol=ttrpc version=3 Dec 16 12:34:06.222329 systemd[1]: Started cri-containerd-619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b.scope - libcontainer container 619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b. Dec 16 12:34:06.231827 systemd[1]: Started cri-containerd-fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9.scope - libcontainer container fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9. Dec 16 12:34:06.239475 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:34:06.272161 containerd[1534]: time="2025-12-16T12:34:06.272088518Z" level=info msg="StartContainer for \"fc1e303f5f23868a7dda4ecc69a00b09a918aaac03e9938d5d4aef3bfe504ac9\" returns successfully" Dec 16 12:34:06.282069 containerd[1534]: time="2025-12-16T12:34:06.281897241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-756cf9c5df-g7rtb,Uid:147be4de-63f4-4902-ba20-f537cb8c893c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"619690456d0cacb5baf01e8baf7110e2d536d0b7ec59cd41926215eb331e2f7b\"" Dec 16 12:34:06.285155 containerd[1534]: time="2025-12-16T12:34:06.284244562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:34:06.294912 kubelet[2676]: I1216 12:34:06.294854 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:34:06.434365 systemd-networkd[1440]: cali0054bfaea03: Gained IPv6LL Dec 16 12:34:06.503999 containerd[1534]: time="2025-12-16T12:34:06.503957108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:06.510109 containerd[1534]: time="2025-12-16T12:34:06.509948829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:34:06.510109 containerd[1534]: time="2025-12-16T12:34:06.509953549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:06.511600 kubelet[2676]: E1216 12:34:06.510302 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:06.511600 kubelet[2676]: E1216 12:34:06.510359 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:06.511600 kubelet[2676]: E1216 12:34:06.510520 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxnrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-g7rtb_calico-apiserver(147be4de-63f4-4902-ba20-f537cb8c893c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:06.511841 kubelet[2676]: E1216 12:34:06.511806 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:34:07.036062 kubelet[2676]: E1216 12:34:07.035893 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:34:07.039156 kubelet[2676]: E1216 12:34:07.038836 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" Dec 16 12:34:07.073563 kubelet[2676]: I1216 12:34:07.071626 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-62d7g" podStartSLOduration=40.071607836 podStartE2EDuration="40.071607836s" podCreationTimestamp="2025-12-16 12:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:34:07.070207315 +0000 UTC m=+46.304861050" watchObservedRunningTime="2025-12-16 12:34:07.071607836 +0000 UTC m=+46.306261531" Dec 16 12:34:07.074369 systemd-networkd[1440]: cali0151dffe637: Gained IPv6LL Dec 16 12:34:07.538503 systemd-networkd[1440]: vxlan.calico: Link UP Dec 16 12:34:07.538511 systemd-networkd[1440]: vxlan.calico: Gained carrier Dec 16 12:34:07.661280 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:34466.service - OpenSSH per-connection server daemon (10.0.0.1:34466). Dec 16 12:34:07.742366 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 34466 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:07.744782 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:07.755607 systemd-logind[1507]: New session 9 of user core. Dec 16 12:34:07.761319 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:34:07.930344 sshd[4968]: Connection closed by 10.0.0.1 port 34466 Dec 16 12:34:07.930637 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:07.934033 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:34466.service: Deactivated successfully. Dec 16 12:34:07.936526 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:34:07.938317 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:34:07.940334 systemd-logind[1507]: Removed session 9. 
Dec 16 12:34:08.041849 kubelet[2676]: E1216 12:34:08.041769 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:34:08.099287 systemd-networkd[1440]: calib5d77c23a7b: Gained IPv6LL Dec 16 12:34:09.058326 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Dec 16 12:34:09.882441 containerd[1534]: time="2025-12-16T12:34:09.881883215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:34:10.104728 containerd[1534]: time="2025-12-16T12:34:10.104395028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:10.107214 containerd[1534]: time="2025-12-16T12:34:10.107041669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:34:10.107214 containerd[1534]: time="2025-12-16T12:34:10.107062669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:34:10.108186 kubelet[2676]: E1216 12:34:10.108064 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:34:10.108514 kubelet[2676]: E1216 12:34:10.108194 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:34:10.108514 kubelet[2676]: E1216 12:34:10.108332 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9d83fe66296a4a18b50d1b1d545f4ddb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:10.112419 containerd[1534]: time="2025-12-16T12:34:10.112363510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:34:10.334898 containerd[1534]: time="2025-12-16T12:34:10.334529841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:10.335764 containerd[1534]: time="2025-12-16T12:34:10.335715641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:34:10.336159 containerd[1534]: time="2025-12-16T12:34:10.335778682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:34:10.336468 kubelet[2676]: E1216 12:34:10.336415 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:34:10.336573 kubelet[2676]: E1216 12:34:10.336468 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:34:10.336627 kubelet[2676]: E1216 12:34:10.336579 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:10.338781 kubelet[2676]: E1216 12:34:10.338678 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a" Dec 16 12:34:12.947309 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:60824.service - OpenSSH per-connection server daemon (10.0.0.1:60824). 
Dec 16 12:34:13.011702 sshd[5000]: Accepted publickey for core from 10.0.0.1 port 60824 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:13.013694 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:13.019108 systemd-logind[1507]: New session 10 of user core. Dec 16 12:34:13.029508 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:34:13.100125 kubelet[2676]: I1216 12:34:13.100057 2676 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:34:13.227780 sshd[5003]: Connection closed by 10.0.0.1 port 60824 Dec 16 12:34:13.228302 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:13.240381 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:60824.service: Deactivated successfully. Dec 16 12:34:13.243360 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:34:13.246542 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:34:13.250518 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:60838.service - OpenSSH per-connection server daemon (10.0.0.1:60838). Dec 16 12:34:13.252716 systemd-logind[1507]: Removed session 10. Dec 16 12:34:13.335825 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 60838 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:13.338590 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:13.344770 systemd-logind[1507]: New session 11 of user core. Dec 16 12:34:13.357932 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:34:13.550010 sshd[5071]: Connection closed by 10.0.0.1 port 60838 Dec 16 12:34:13.551386 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:13.563423 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:60838.service: Deactivated successfully. Dec 16 12:34:13.566474 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:34:13.572244 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:34:13.575951 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:60842.service - OpenSSH per-connection server daemon (10.0.0.1:60842). Dec 16 12:34:13.580675 systemd-logind[1507]: Removed session 11. Dec 16 12:34:13.632264 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 60842 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:13.633735 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:13.638974 systemd-logind[1507]: New session 12 of user core. Dec 16 12:34:13.650402 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:34:13.812840 sshd[5085]: Connection closed by 10.0.0.1 port 60842 Dec 16 12:34:13.813481 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:13.817936 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:60842.service: Deactivated successfully. Dec 16 12:34:13.822303 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:34:13.823898 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:34:13.826509 systemd-logind[1507]: Removed session 12. 
Dec 16 12:34:16.882172 containerd[1534]: time="2025-12-16T12:34:16.881981047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:34:17.103687 containerd[1534]: time="2025-12-16T12:34:17.103635761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:17.105269 containerd[1534]: time="2025-12-16T12:34:17.105203681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:34:17.105361 containerd[1534]: time="2025-12-16T12:34:17.105270081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:17.105876 kubelet[2676]: E1216 12:34:17.105568 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:17.105876 kubelet[2676]: E1216 12:34:17.105666 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:17.109018 kubelet[2676]: E1216 12:34:17.108603 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl5n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-tjvzk_calico-apiserver(a11b445e-3d05-4dcc-ad5f-d55a1cff6339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:17.111168 kubelet[2676]: E1216 12:34:17.109828 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:34:17.882019 containerd[1534]: time="2025-12-16T12:34:17.881932595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:34:18.069258 containerd[1534]: time="2025-12-16T12:34:18.069190382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:18.079088 containerd[1534]: time="2025-12-16T12:34:18.078975303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:34:18.079088 containerd[1534]: time="2025-12-16T12:34:18.079031263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:18.079379 kubelet[2676]: E1216 12:34:18.079233 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:34:18.079379 kubelet[2676]: E1216 12:34:18.079284 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:34:18.079538 kubelet[2676]: E1216 12:34:18.079493 2676 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk9g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmw9j_calico-system(d783d037-6b14-4dbb-b519-84eab1f1deab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:18.080828 kubelet[2676]: E1216 12:34:18.080776 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" 
podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" Dec 16 12:34:18.834691 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:60850.service - OpenSSH per-connection server daemon (10.0.0.1:60850). Dec 16 12:34:18.923154 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 60850 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:18.922794 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:18.929146 systemd-logind[1507]: New session 13 of user core. Dec 16 12:34:18.941391 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:34:19.084649 sshd[5113]: Connection closed by 10.0.0.1 port 60850 Dec 16 12:34:19.085045 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:19.089459 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:60850.service: Deactivated successfully. Dec 16 12:34:19.093891 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:34:19.095489 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:34:19.096962 systemd-logind[1507]: Removed session 13. Dec 16 12:34:19.882457 containerd[1534]: time="2025-12-16T12:34:19.882392344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:34:20.098762 containerd[1534]: time="2025-12-16T12:34:20.098633491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:20.099809 containerd[1534]: time="2025-12-16T12:34:20.099748731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:34:20.099809 containerd[1534]: time="2025-12-16T12:34:20.099833731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:34:20.100033 kubelet[2676]: E1216 12:34:20.099984 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:34:20.100652 kubelet[2676]: E1216 12:34:20.100053 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:34:20.100652 kubelet[2676]: E1216 12:34:20.100312 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5drf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6877579798-hmh9z_calico-system(d618d708-5752-42c7-bde6-a99eef3e5715): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:20.100836 containerd[1534]: time="2025-12-16T12:34:20.100764891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:34:20.102150 kubelet[2676]: E1216 12:34:20.102076 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" 
podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:34:20.327376 containerd[1534]: time="2025-12-16T12:34:20.327306558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:20.328180 containerd[1534]: time="2025-12-16T12:34:20.328120838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:34:20.328247 containerd[1534]: time="2025-12-16T12:34:20.328227798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:34:20.328506 kubelet[2676]: E1216 12:34:20.328366 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:20.328562 kubelet[2676]: E1216 12:34:20.328520 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:34:20.329078 kubelet[2676]: E1216 12:34:20.328650 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxnrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-g7rtb_calico-apiserver(147be4de-63f4-4902-ba20-f537cb8c893c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:20.329844 kubelet[2676]: E1216 12:34:20.329809 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:34:20.883637 containerd[1534]: time="2025-12-16T12:34:20.883501986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:34:21.101599 containerd[1534]: time="2025-12-16T12:34:21.101530571Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:21.102488 containerd[1534]: time="2025-12-16T12:34:21.102446291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:34:21.102563 containerd[1534]: time="2025-12-16T12:34:21.102527811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:34:21.102693 kubelet[2676]: E1216 12:34:21.102650 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:34:21.102957 kubelet[2676]: E1216 12:34:21.102707 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:34:21.102957 kubelet[2676]: E1216 12:34:21.102818 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:21.104700 containerd[1534]: time="2025-12-16T12:34:21.104663972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:34:21.315728 containerd[1534]: time="2025-12-16T12:34:21.315660556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:21.317247 containerd[1534]: time="2025-12-16T12:34:21.317194556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:34:21.317352 containerd[1534]: time="2025-12-16T12:34:21.317259116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:34:21.317427 kubelet[2676]: E1216 12:34:21.317386 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:34:21.317483 kubelet[2676]: E1216 12:34:21.317437 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:34:21.317584 kubelet[2676]: E1216 12:34:21.317548 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:21.319669 kubelet[2676]: E1216 12:34:21.319613 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:34:24.102298 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:55762.service - OpenSSH per-connection server daemon (10.0.0.1:55762). Dec 16 12:34:24.176945 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 55762 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:24.179319 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:24.196183 systemd-logind[1507]: New session 14 of user core. Dec 16 12:34:24.202349 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:34:24.348987 sshd[5132]: Connection closed by 10.0.0.1 port 55762 Dec 16 12:34:24.349353 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:24.353315 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:55762.service: Deactivated successfully. Dec 16 12:34:24.356762 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:34:24.359493 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:34:24.361935 systemd-logind[1507]: Removed session 14. Dec 16 12:34:24.882121 kubelet[2676]: E1216 12:34:24.882023 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a" Dec 16 12:34:29.361363 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:55764.service - OpenSSH per-connection server daemon (10.0.0.1:55764). Dec 16 12:34:29.440771 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 55764 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:29.442382 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:29.446907 systemd-logind[1507]: New session 15 of user core. Dec 16 12:34:29.454363 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:34:29.585104 sshd[5158]: Connection closed by 10.0.0.1 port 55764 Dec 16 12:34:29.585956 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:29.600484 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:55764.service: Deactivated successfully. Dec 16 12:34:29.603946 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:34:29.604781 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:34:29.607827 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:55774.service - OpenSSH per-connection server daemon (10.0.0.1:55774). 
Dec 16 12:34:29.608602 systemd-logind[1507]: Removed session 15. Dec 16 12:34:29.673715 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 55774 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:29.677474 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:29.688660 systemd-logind[1507]: New session 16 of user core. Dec 16 12:34:29.697340 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:34:29.921521 sshd[5174]: Connection closed by 10.0.0.1 port 55774 Dec 16 12:34:29.922809 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:29.935622 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:55774.service: Deactivated successfully. Dec 16 12:34:29.937482 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:34:29.938399 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:34:29.942009 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780). Dec 16 12:34:29.942913 systemd-logind[1507]: Removed session 16. Dec 16 12:34:30.006622 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:30.008289 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:30.013093 systemd-logind[1507]: New session 17 of user core. Dec 16 12:34:30.019327 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:34:30.623226 sshd[5189]: Connection closed by 10.0.0.1 port 55780 Dec 16 12:34:30.622928 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:30.633903 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:55780.service: Deactivated successfully. Dec 16 12:34:30.637873 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:34:30.638904 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:34:30.643502 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:55782.service - OpenSSH per-connection server daemon (10.0.0.1:55782). Dec 16 12:34:30.645080 systemd-logind[1507]: Removed session 17. Dec 16 12:34:30.708693 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 55782 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:30.710244 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:30.715716 systemd-logind[1507]: New session 18 of user core. Dec 16 12:34:30.726377 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:34:30.896108 kubelet[2676]: E1216 12:34:30.895993 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c" Dec 16 12:34:30.897165 kubelet[2676]: E1216 12:34:30.897043 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab" Dec 16 12:34:31.047519 sshd[5218]: Connection closed by 10.0.0.1 port 55782 Dec 16 12:34:31.047875 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:31.057967 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:55782.service: Deactivated successfully. Dec 16 12:34:31.060105 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:34:31.063368 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:34:31.067421 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190). Dec 16 12:34:31.069330 systemd-logind[1507]: Removed session 18. Dec 16 12:34:31.138737 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:31.140417 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:31.145176 systemd-logind[1507]: New session 19 of user core. Dec 16 12:34:31.152372 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:34:31.298975 sshd[5232]: Connection closed by 10.0.0.1 port 43190 Dec 16 12:34:31.299725 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:31.302753 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:43190.service: Deactivated successfully. Dec 16 12:34:31.306006 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:34:31.308237 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:34:31.310009 systemd-logind[1507]: Removed session 19. 
Dec 16 12:34:31.881198 kubelet[2676]: E1216 12:34:31.881002 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339" Dec 16 12:34:32.883244 kubelet[2676]: E1216 12:34:32.883193 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583" Dec 16 12:34:34.885086 kubelet[2676]: E1216 12:34:34.885026 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715" Dec 16 12:34:36.310877 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:43202.service - OpenSSH per-connection server daemon (10.0.0.1:43202). Dec 16 12:34:36.380002 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 43202 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:36.381533 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:36.389784 systemd-logind[1507]: New session 20 of user core. Dec 16 12:34:36.395337 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:34:36.551484 sshd[5254]: Connection closed by 10.0.0.1 port 43202 Dec 16 12:34:36.551834 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Dec 16 12:34:36.555652 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:34:36.555837 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:43202.service: Deactivated successfully. Dec 16 12:34:36.559003 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:34:36.560833 systemd-logind[1507]: Removed session 20. 
Dec 16 12:34:38.886304 containerd[1534]: time="2025-12-16T12:34:38.885884427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:34:39.129703 containerd[1534]: time="2025-12-16T12:34:39.129648524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:39.130727 containerd[1534]: time="2025-12-16T12:34:39.130662250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:34:39.130783 containerd[1534]: time="2025-12-16T12:34:39.130759971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:34:39.130999 kubelet[2676]: E1216 12:34:39.130940 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:34:39.131324 kubelet[2676]: E1216 12:34:39.131002 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:34:39.131324 kubelet[2676]: E1216 12:34:39.131147 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9d83fe66296a4a18b50d1b1d545f4ddb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:39.134327 containerd[1534]: time="2025-12-16T12:34:39.134291073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:34:39.363224 containerd[1534]: time="2025-12-16T12:34:39.363164378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:34:39.364265 containerd[1534]: time="2025-12-16T12:34:39.364206624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:34:39.364952 containerd[1534]: time="2025-12-16T12:34:39.364289945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:34:39.365038 kubelet[2676]: E1216 12:34:39.364430 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:34:39.365038 kubelet[2676]: E1216 12:34:39.364485 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:34:39.365038 kubelet[2676]: E1216 12:34:39.364589 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtvb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-545c7bf659-q24lc_calico-system(3fab4643-a193-458b-8824-ad84f0104f4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:34:39.366158 kubelet[2676]: E1216 12:34:39.366078 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a" Dec 16 12:34:41.565541 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:59630.service - OpenSSH per-connection server daemon (10.0.0.1:59630). Dec 16 12:34:41.636084 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 59630 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8 Dec 16 12:34:41.637711 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:34:41.643505 systemd-logind[1507]: New session 21 of user core. 
Dec 16 12:34:41.650371 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 12:34:41.800911 sshd[5270]: Connection closed by 10.0.0.1 port 59630
Dec 16 12:34:41.801362 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:41.805685 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:59630.service: Deactivated successfully.
Dec 16 12:34:41.807598 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 12:34:41.808313 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit.
Dec 16 12:34:41.809806 systemd-logind[1507]: Removed session 21.
Dec 16 12:34:42.882272 containerd[1534]: time="2025-12-16T12:34:42.881833310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:34:43.121313 containerd[1534]: time="2025-12-16T12:34:43.121260150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:43.122351 containerd[1534]: time="2025-12-16T12:34:43.122288076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:34:43.122419 containerd[1534]: time="2025-12-16T12:34:43.122357596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:34:43.122823 kubelet[2676]: E1216 12:34:43.122560 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:34:43.122823 kubelet[2676]: E1216 12:34:43.122633 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:34:43.122823 kubelet[2676]: E1216 12:34:43.122770 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk9g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmw9j_calico-system(d783d037-6b14-4dbb-b519-84eab1f1deab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:43.125359 kubelet[2676]: E1216 12:34:43.125315 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmw9j" podUID="d783d037-6b14-4dbb-b519-84eab1f1deab"
Dec 16 12:34:43.888236 containerd[1534]: time="2025-12-16T12:34:43.888008450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:34:44.119615 containerd[1534]: time="2025-12-16T12:34:44.119560332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:44.156647 containerd[1534]: time="2025-12-16T12:34:44.156502134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:34:44.156647 containerd[1534]: time="2025-12-16T12:34:44.156563814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:34:44.156803 kubelet[2676]: E1216 12:34:44.156736 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:34:44.156803 kubelet[2676]: E1216 12:34:44.156789 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:34:44.159260 kubelet[2676]: E1216 12:34:44.156918 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl5n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-tjvzk_calico-apiserver(a11b445e-3d05-4dcc-ad5f-d55a1cff6339): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:44.159260 kubelet[2676]: E1216 12:34:44.158111 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-tjvzk" podUID="a11b445e-3d05-4dcc-ad5f-d55a1cff6339"
Dec 16 12:34:44.882395 containerd[1534]: time="2025-12-16T12:34:44.882114099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:34:45.093594 containerd[1534]: time="2025-12-16T12:34:45.093535282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:45.121585 containerd[1534]: time="2025-12-16T12:34:45.121521352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:34:45.121585 containerd[1534]: time="2025-12-16T12:34:45.121555752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:34:45.121847 kubelet[2676]: E1216 12:34:45.121792 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:34:45.121898 kubelet[2676]: E1216 12:34:45.121848 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:34:45.122064 kubelet[2676]: E1216 12:34:45.121983 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxnrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-756cf9c5df-g7rtb_calico-apiserver(147be4de-63f4-4902-ba20-f537cb8c893c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:45.123300 kubelet[2676]: E1216 12:34:45.123243 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-756cf9c5df-g7rtb" podUID="147be4de-63f4-4902-ba20-f537cb8c893c"
Dec 16 12:34:46.812900 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:59640.service - OpenSSH per-connection server daemon (10.0.0.1:59640).
Dec 16 12:34:46.881560 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 59640 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:34:46.884309 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:46.889481 systemd-logind[1507]: New session 22 of user core.
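
Note: both calico-apiserver replicas (-tjvzk and -g7rtb) fail identically on separate sync loops, so the missing tag affects the whole deployment rather than one node's cache. The same condition is what `kubectl describe` surfaces as a container waiting reason; a hedged client-go sketch (the file name and kubeconfig path are assumptions) that prints those reasons for a namespace:

    // podwait.go - sketch, assuming a kubeconfig at the default location;
    // lists each container's waiting reason, where ErrImagePull and
    // ImagePullBackOff from the records above become visible via the API.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                if s.State.Waiting != nil {
                    // e.g. "goldmane-666569f655-pmw9j/goldmane  ErrImagePull: ...not found"
                    fmt.Printf("%s/%s  %s: %s\n", p.Name, s.Name, s.State.Waiting.Reason, s.State.Waiting.Message)
                }
            }
        }
    }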
Dec 16 12:34:46.899366 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 12:34:47.063708 sshd[5313]: Connection closed by 10.0.0.1 port 59640
Dec 16 12:34:47.064334 sshd-session[5310]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:47.067670 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:59640.service: Deactivated successfully.
Dec 16 12:34:47.070001 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 12:34:47.070755 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit.
Dec 16 12:34:47.072116 systemd-logind[1507]: Removed session 22.
Dec 16 12:34:47.883294 containerd[1534]: time="2025-12-16T12:34:47.883246537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:34:48.073972 containerd[1534]: time="2025-12-16T12:34:48.073918134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:48.075060 containerd[1534]: time="2025-12-16T12:34:48.074990459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:34:48.075060 containerd[1534]: time="2025-12-16T12:34:48.075038980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 12:34:48.075339 kubelet[2676]: E1216 12:34:48.075243 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:34:48.075639 kubelet[2676]: E1216 12:34:48.075345 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:34:48.075639 kubelet[2676]: E1216 12:34:48.075467 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:48.077703 containerd[1534]: time="2025-12-16T12:34:48.077646312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:34:48.278865 containerd[1534]: time="2025-12-16T12:34:48.278808986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:48.279770 containerd[1534]: time="2025-12-16T12:34:48.279732070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:34:48.279898 containerd[1534]: time="2025-12-16T12:34:48.279802311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 12:34:48.280203 kubelet[2676]: E1216 12:34:48.280083 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:34:48.280203 kubelet[2676]: E1216 12:34:48.280186 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:34:48.280791 kubelet[2676]: E1216 12:34:48.280744 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7bms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vwtlw_calico-system(d21198e2-9674-4db1-a87b-fd2588ce9583): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:48.282409 kubelet[2676]: E1216 12:34:48.282315 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vwtlw" podUID="d21198e2-9674-4db1-a87b-fd2588ce9583"
Dec 16 12:34:49.882206 containerd[1534]: time="2025-12-16T12:34:49.881923315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:34:50.138223 containerd[1534]: time="2025-12-16T12:34:50.138056252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:34:50.150165 containerd[1534]: time="2025-12-16T12:34:50.150054189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:34:50.150165 containerd[1534]: time="2025-12-16T12:34:50.150119989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:34:50.151393 kubelet[2676]: E1216 12:34:50.151331 2676 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:34:50.151393 kubelet[2676]: E1216 12:34:50.151389 2676 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:34:50.151690 kubelet[2676]: E1216 12:34:50.151515 2676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5drf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6877579798-hmh9z_calico-system(d618d708-5752-42c7-bde6-a99eef3e5715): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:34:50.153030 kubelet[2676]: E1216 12:34:50.152977 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6877579798-hmh9z" podUID="d618d708-5752-42c7-bde6-a99eef3e5715"
Dec 16 12:34:50.884616 kubelet[2676]: E1216 12:34:50.884511 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-545c7bf659-q24lc" podUID="3fab4643-a193-458b-8824-ad84f0104f4a"
Dec 16 12:34:52.073839 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:49342.service - OpenSSH per-connection server daemon (10.0.0.1:49342).
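
Note: the 12:34:50.884616 record marks the whisker pod's transition from ErrImagePull to ImagePullBackOff. After repeated failures kubelet stops retrying immediately and delays each new attempt, doubling from a 10-second base up to a five-minute ceiling, per the documented image pull back-off behavior. A small sketch of that schedule (illustrative only, not kubelet's actual code):

    // backoff.go - sketch of the doubling delay behind ImagePullBackOff,
    // assuming the documented 10s base and 5m cap.
    package main

    import (
        "fmt"
        "time"
    )

    func pullDelay(attempt int) time.Duration {
        const (
            base     = 10 * time.Second
            maxDelay = 5 * time.Minute
        )
        d := base
        for i := 0; i < attempt; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        // Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m.
        for n := 0; n < 7; n++ {
            fmt.Printf("retry %d after %v\n", n, pullDelay(n))
        }
    }

This is why the log shows pulls for the same image spaced further and further apart even though each one fails instantly with the same 404.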
Dec 16 12:34:52.144866 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 49342 ssh2: RSA SHA256:BaSANVIxG0UVtpwpaUGngK+MAJAznN//djAQgRKnLS8
Dec 16 12:34:52.146609 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:34:52.150825 systemd-logind[1507]: New session 23 of user core.
Dec 16 12:34:52.161369 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 12:34:52.295180 sshd[5339]: Connection closed by 10.0.0.1 port 49342
Dec 16 12:34:52.295530 sshd-session[5334]: pam_unix(sshd:session): session closed for user core
Dec 16 12:34:52.299232 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:49342.service: Deactivated successfully.
Dec 16 12:34:52.301188 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 12:34:52.302008 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit.
Dec 16 12:34:52.303646 systemd-logind[1507]: Removed session 23.
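
Closing note on this window: every failing pull targets ghcr.io/flatcar/calico/*:v3.30.4 and dies with the registry's own 404 rather than an authentication or network error, which points at a tag that was never published under that mirror path. One way to confirm directly against the registry; the sketch below uses the standard OCI distribution anonymous-token flow, and the exact ghcr.io endpoints should be treated as an assumption to verify:

    // tagcheck.go - sketch: asks the registry whether the tag exists.
    // Endpoints follow the usual anonymous-pull token flow for public repos.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        repo, tag := "flatcar/calico/whisker-backend", "v3.30.4"

        // 1. Fetch an anonymous pull token.
        resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }

        // 2. HEAD the manifest; 404 here is the same NotFound containerd reported.
        req, _ := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
        req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        res.Body.Close()
        fmt.Println(repo+":"+tag, "->", res.Status) // expect "404 Not Found" per the log
    }

If the HEAD really returns 404, nothing on the node will fix these pulls; the tag has to be published to the mirror, or the workloads repointed at a registry that carries it.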