Dec 16 12:36:47.759231 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:36:47.759254 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:36:47.759263 kernel: KASLR enabled
Dec 16 12:36:47.759269 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:36:47.759275 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 16 12:36:47.759280 kernel: random: crng init done
Dec 16 12:36:47.759287 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 16 12:36:47.759293 kernel: secureboot: Secure boot enabled
Dec 16 12:36:47.759298 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:36:47.759306 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 16 12:36:47.759312 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:36:47.759318 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759323 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759329 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759337 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759344 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759350 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759356 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759363 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759369 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:36:47.759375 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 16 12:36:47.759381 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:36:47.759387 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:36:47.759393 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 16 12:36:47.759399 kernel: Zone ranges:
Dec 16 12:36:47.759406 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:36:47.759412 kernel: DMA32 empty
Dec 16 12:36:47.759418 kernel: Normal empty
Dec 16 12:36:47.759424 kernel: Device empty
Dec 16 12:36:47.759430 kernel: Movable zone start for each node
Dec 16 12:36:47.759436 kernel: Early memory node ranges
Dec 16 12:36:47.759442 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 16 12:36:47.759448 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 16 12:36:47.759454 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 16 12:36:47.759460 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 16 12:36:47.759466 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 16 12:36:47.759472 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 16 12:36:47.759479 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 16 12:36:47.759485 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 16 12:36:47.759491 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 16 12:36:47.759500 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 16 12:36:47.759506 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 16 12:36:47.759513 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 16 12:36:47.759519 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:36:47.759527 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:36:47.759534 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:36:47.759540 kernel: psci: Trusted OS migration not required
Dec 16 12:36:47.759546 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:36:47.759553 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:36:47.759559 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:36:47.759566 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:36:47.759573 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 16 12:36:47.759579 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:36:47.759587 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:36:47.759593 kernel: CPU features: detected: Spectre-v4
Dec 16 12:36:47.759599 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:36:47.759606 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:36:47.759613 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:36:47.759619 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:36:47.759626 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:36:47.759632 kernel: alternatives: applying boot alternatives
Dec 16 12:36:47.759639 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:36:47.759646 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:36:47.759662 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:36:47.759671 kernel: Fallback order for Node 0: 0
Dec 16 12:36:47.759678 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 16 12:36:47.759684 kernel: Policy zone: DMA
Dec 16 12:36:47.759691 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:36:47.759697 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 16 12:36:47.759703 kernel: software IO TLB: area num 4.
Dec 16 12:36:47.759710 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 16 12:36:47.759716 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 16 12:36:47.759723 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:36:47.759729 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:36:47.759736 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:36:47.759743 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:36:47.759751 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:36:47.759757 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:36:47.759764 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
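
The "Kernel command line" entry above is the single source of truth that the initrd tooling later re-reads: root=LABEL=ROOT selects the root filesystem, while verity.usr and verity.usrhash pin the /usr partition to a dm-verity root hash. A minimal sketch of splitting such a line into bare flags and key/value pairs (Python; the parse_cmdline helper is illustrative, not part of the kernel or Flatcar tooling):

    import shlex

    # Command line copied verbatim from the log entry above.
    CMDLINE = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52"
    )

    def parse_cmdline(line):
        """Split a kernel command line into key=value params and bare flags."""
        params, flags = {}, set()
        for token in shlex.split(line):  # shlex keeps quoted values intact
            if "=" in token:
                key, _, value = token.partition("=")
                params[key] = value
            else:
                flags.add(token)
        return params, flags

    params, flags = parse_cmdline(CMDLINE)
    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # 361f5bad...
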
Dec 16 12:36:47.759770 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:36:47.759777 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:36:47.759783 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:36:47.759790 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:36:47.759797 kernel: GICv3: 256 SPIs implemented
Dec 16 12:36:47.759803 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:36:47.759809 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:36:47.759853 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:36:47.759863 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:36:47.759873 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:36:47.759879 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:36:47.759886 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:36:47.759893 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:36:47.759899 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 16 12:36:47.759906 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 16 12:36:47.759912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:36:47.759919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:36:47.759925 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:36:47.759932 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:36:47.759938 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:36:47.759946 kernel: arm-pv: using stolen time PV
Dec 16 12:36:47.759953 kernel: Console: colour dummy device 80x25
Dec 16 12:36:47.759960 kernel: ACPI: Core revision 20240827
Dec 16 12:36:47.759967 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:36:47.759974 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:36:47.759981 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:36:47.759988 kernel: landlock: Up and running.
Dec 16 12:36:47.759995 kernel: SELinux: Initializing.
Dec 16 12:36:47.760001 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:36:47.760009 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:36:47.760016 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:36:47.760023 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:36:47.760030 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:36:47.760037 kernel: Remapping and enabling EFI services.
Dec 16 12:36:47.760043 kernel: smp: Bringing up secondary CPUs ...
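
The "Calibrating delay loop (skipped)" entry above shows the kernel deriving BogoMIPS directly from the 25 MHz arch timer instead of measuring it. A worked check of the printed numbers (HZ=1000 is an assumption about this kernel's CONFIG_HZ; the lpj-based formula is the classic one):

    # The timer rate comes from the arch_timer line above; HZ is assumed.
    TIMER_HZ = 25_000_000   # arch_timer: cp15 timer(s) running at 25.00MHz
    HZ = 1000               # assumed scheduler tick rate (CONFIG_HZ)

    lpj = TIMER_HZ // HZ             # loops per jiffy -> lpj=25000
    bogomips = lpj * HZ / 500_000    # -> 50.0, matching "50.00 BogoMIPS"
    print(lpj, bogomips)
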
Dec 16 12:36:47.760050 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:36:47.760057 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:36:47.760063 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 16 12:36:47.760072 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:36:47.760083 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:36:47.760090 kernel: Detected PIPT I-cache on CPU2
Dec 16 12:36:47.760099 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 16 12:36:47.760106 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 16 12:36:47.760113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:36:47.760120 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 16 12:36:47.760127 kernel: Detected PIPT I-cache on CPU3
Dec 16 12:36:47.760135 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 16 12:36:47.760143 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 16 12:36:47.760150 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:36:47.760156 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 16 12:36:47.760163 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:36:47.760170 kernel: SMP: Total of 4 processors activated.
Dec 16 12:36:47.760177 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:36:47.760184 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:36:47.760191 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:36:47.760198 kernel: CPU features: detected: Common not Private translations
Dec 16 12:36:47.760207 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:36:47.760214 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:36:47.760221 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:36:47.760228 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:36:47.760235 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:36:47.760242 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:36:47.760249 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:36:47.760256 kernel: alternatives: applying system-wide alternatives
Dec 16 12:36:47.760263 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 16 12:36:47.760272 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 16 12:36:47.760279 kernel: devtmpfs: initialized
Dec 16 12:36:47.760286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:36:47.760294 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:36:47.760301 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:36:47.760307 kernel: 0 pages in range for non-PLT usage
Dec 16 12:36:47.760315 kernel: 508400 pages in range for PLT usage
Dec 16 12:36:47.760322 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:36:47.760329 kernel: SMBIOS 3.0.0 present.
Dec 16 12:36:47.760337 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 16 12:36:47.760345 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:36:47.760351 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:36:47.760359 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:36:47.760366 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:36:47.760373 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:36:47.760380 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:36:47.760387 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Dec 16 12:36:47.760394 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:36:47.760403 kernel: cpuidle: using governor menu
Dec 16 12:36:47.760410 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:36:47.760417 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:36:47.760424 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:36:47.760431 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:36:47.760438 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:36:47.760445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:36:47.760452 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:36:47.760459 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:36:47.760467 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:36:47.760474 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:36:47.760482 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:36:47.760488 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:36:47.760495 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:36:47.760502 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:36:47.760509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:36:47.760516 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:36:47.760523 kernel: ACPI: Interpreter enabled
Dec 16 12:36:47.760532 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:36:47.760539 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:36:47.760546 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:36:47.760553 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:36:47.760560 kernel: ACPI: CPU2 has been hot-added
Dec 16 12:36:47.760567 kernel: ACPI: CPU3 has been hot-added
Dec 16 12:36:47.760574 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:36:47.760581 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:36:47.760588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:36:47.760744 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:36:47.760811 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:36:47.760891 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:36:47.761001 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:36:47.761080 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:36:47.761090 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:36:47.761097 kernel: PCI host bridge to bus 0000:00
Dec 16 12:36:47.761172 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:36:47.761227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:36:47.761280 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:36:47.761331 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:36:47.761412 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:36:47.761489 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:36:47.761553 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 16 12:36:47.761613 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 16 12:36:47.761686 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:36:47.761748 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:36:47.761807 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 16 12:36:47.761912 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 16 12:36:47.761971 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 16 12:36:47.762028 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:36:47.762181 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 16 12:36:47.762193 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:36:47.762200 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:36:47.762207 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:36:47.762214 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:36:47.762222 kernel: iommu: Default domain type: Translated
Dec 16 12:36:47.762229 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:36:47.762236 kernel: efivars: Registered efivars operations
Dec 16 12:36:47.762247 kernel: vgaarb: loaded
Dec 16 12:36:47.762254 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:36:47.762261 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:36:47.762269 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:36:47.762276 kernel: pnp: PnP ACPI init
Dec 16 12:36:47.762350 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 16 12:36:47.762360 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:36:47.762368 kernel: NET: Registered PF_INET protocol family
Dec 16 12:36:47.762378 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:36:47.762385 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:36:47.762392 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:36:47.762400 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:36:47.762407 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:36:47.762414 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:36:47.762421 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:36:47.762429 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:36:47.762436 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:36:47.762445 kernel: PCI: CLS 0 bytes, default 64
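
The BAR lines above for device 0000:00:01.0 (a virtio device, vendor 1af4) show one I/O BAR and two memory BARs, one of them 64-bit prefetchable. Those printed flags are encoded in the low bits of the raw 32-bit BAR register; a sketch of that decoding per the PCI specification (the example values are illustrative):

    def decode_bar(raw):
        """Decode the low flag bits of a raw 32-bit PCI BAR value."""
        if raw & 0x1:                        # bit 0 set -> I/O space
            return f"io base=0x{raw & ~0x3:x}"
        width = "64bit" if (raw >> 1) & 0x3 == 0x2 else "32bit"
        pref = " pref" if raw & 0x8 else ""  # bit 3 -> prefetchable
        return f"mem base=0x{raw & ~0xf:x} {width}{pref}"

    print(decode_bar(0x1001))  # io base=0x1000, like BAR 0 after assignment
    print(decode_bar(0x000C))  # mem base=0x0 64bit pref, like BAR 4's low dword
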
Dec 16 12:36:47.762452 kernel: kvm [1]: HYP mode not available
Dec 16 12:36:47.762459 kernel: Initialise system trusted keyrings
Dec 16 12:36:47.762466 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:36:47.762474 kernel: Key type asymmetric registered
Dec 16 12:36:47.762481 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:36:47.762488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:36:47.762496 kernel: io scheduler mq-deadline registered
Dec 16 12:36:47.762503 kernel: io scheduler kyber registered
Dec 16 12:36:47.762511 kernel: io scheduler bfq registered
Dec 16 12:36:47.762518 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:36:47.762526 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:36:47.762533 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:36:47.762595 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 16 12:36:47.762605 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:36:47.762612 kernel: thunder_xcv, ver 1.0
Dec 16 12:36:47.762619 kernel: thunder_bgx, ver 1.0
Dec 16 12:36:47.762626 kernel: nicpf, ver 1.0
Dec 16 12:36:47.762636 kernel: nicvf, ver 1.0
Dec 16 12:36:47.762717 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:36:47.762779 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:36:47 UTC (1765888607)
Dec 16 12:36:47.762790 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:36:47.762797 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:36:47.762805 kernel: watchdog: NMI not fully supported
Dec 16 12:36:47.762812 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:36:47.762835 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:36:47.762846 kernel: Segment Routing with IPv6
Dec 16 12:36:47.762853 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:36:47.762860 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:36:47.762867 kernel: Key type dns_resolver registered
Dec 16 12:36:47.762874 kernel: registered taskstats version 1
Dec 16 12:36:47.762882 kernel: Loading compiled-in X.509 certificates
Dec 16 12:36:47.762890 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:36:47.762897 kernel: Demotion targets for Node 0: null
Dec 16 12:36:47.762904 kernel: Key type .fscrypt registered
Dec 16 12:36:47.762914 kernel: Key type fscrypt-provisioning registered
Dec 16 12:36:47.762921 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:36:47.762928 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:36:47.762935 kernel: ima: No architecture policies found
Dec 16 12:36:47.762943 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:36:47.762950 kernel: clk: Disabling unused clocks
Dec 16 12:36:47.762957 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:36:47.762964 kernel: Warning: unable to open an initial console.
Dec 16 12:36:47.762972 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:36:47.762980 kernel: Run /init as init process
Dec 16 12:36:47.762988 kernel: with arguments:
Dec 16 12:36:47.762995 kernel: /init
Dec 16 12:36:47.763002 kernel: with environment:
Dec 16 12:36:47.763009 kernel: HOME=/
Dec 16 12:36:47.763016 kernel: TERM=linux
Dec 16 12:36:47.763024 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:36:47.763035 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:36:47.763045 systemd[1]: Detected virtualization kvm.
Dec 16 12:36:47.763052 systemd[1]: Detected architecture arm64.
Dec 16 12:36:47.763059 systemd[1]: Running in initrd.
Dec 16 12:36:47.763066 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:36:47.763074 systemd[1]: Hostname set to <localhost>.
Dec 16 12:36:47.763082 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:36:47.763089 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:36:47.763097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:36:47.763106 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:36:47.763114 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:36:47.763122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:36:47.763130 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:36:47.763139 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:36:47.763147 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:36:47.763157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:36:47.763165 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:36:47.763173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:36:47.763181 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:36:47.763189 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:36:47.763197 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:36:47.763204 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:36:47.763212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:36:47.763220 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:36:47.763229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:36:47.763237 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:36:47.763245 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:36:47.763253 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:36:47.763260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:36:47.763268 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:36:47.763276 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:36:47.763283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:36:47.763293 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:36:47.763301 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:36:47.763308 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:36:47.763316 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:36:47.763324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:36:47.763331 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:36:47.763339 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:36:47.763349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:36:47.763357 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:36:47.763365 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:36:47.763393 systemd-journald[243]: Collecting audit messages is disabled.
Dec 16 12:36:47.763416 systemd-journald[243]: Journal started
Dec 16 12:36:47.763435 systemd-journald[243]: Runtime Journal (/run/log/journal/868fe84d11e043e59c8969e1194fc4e4) is 6M, max 48.5M, 42.4M free.
Dec 16 12:36:47.772963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:36:47.773017 kernel: Bridge firewalling registered
Dec 16 12:36:47.773027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:36:47.753363 systemd-modules-load[245]: Inserted module 'overlay'
Dec 16 12:36:47.768682 systemd-modules-load[245]: Inserted module 'br_netfilter'
Dec 16 12:36:47.776892 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:36:47.778089 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:36:47.781642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:36:47.783579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:36:47.785465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:36:47.796068 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:36:47.798754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:36:47.807230 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:36:47.808333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:36:47.812064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:36:47.813372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:36:47.817272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:36:47.818393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:36:47.821001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:36:47.853493 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:36:47.867300 systemd-resolved[290]: Positive Trust Anchors:
Dec 16 12:36:47.867319 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:36:47.867351 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:36:47.872733 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 16 12:36:47.873865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:36:47.876566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:36:47.936856 kernel: SCSI subsystem initialized
Dec 16 12:36:47.941841 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:36:47.949864 kernel: iscsi: registered transport (tcp)
Dec 16 12:36:47.962842 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:36:47.962875 kernel: QLogic iSCSI HBA Driver
Dec 16 12:36:47.980985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:36:48.005149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:36:48.007296 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:36:48.057212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:36:48.059602 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:36:48.124866 kernel: raid6: neonx8 gen() 15098 MB/s
Dec 16 12:36:48.140881 kernel: raid6: neonx4 gen() 15730 MB/s
Dec 16 12:36:48.157877 kernel: raid6: neonx2 gen() 13029 MB/s
Dec 16 12:36:48.174885 kernel: raid6: neonx1 gen() 6219 MB/s
Dec 16 12:36:48.192004 kernel: raid6: int64x8 gen() 6051 MB/s
Dec 16 12:36:48.208874 kernel: raid6: int64x4 gen() 7220 MB/s
Dec 16 12:36:48.225870 kernel: raid6: int64x2 gen() 5978 MB/s
Dec 16 12:36:48.242882 kernel: raid6: int64x1 gen() 5052 MB/s
Dec 16 12:36:48.242950 kernel: raid6: using algorithm neonx4 gen() 15730 MB/s
Dec 16 12:36:48.260870 kernel: raid6: .... xor() 12263 MB/s, rmw enabled
Dec 16 12:36:48.260932 kernel: raid6: using neon recovery algorithm
Dec 16 12:36:48.266221 kernel: xor: measuring software checksum speed
Dec 16 12:36:48.266272 kernel: 8regs : 20770 MB/sec
Dec 16 12:36:48.266843 kernel: 32regs : 21658 MB/sec
Dec 16 12:36:48.267847 kernel: arm64_neon : 24870 MB/sec
Dec 16 12:36:48.267870 kernel: xor: using function: arm64_neon (24870 MB/sec)
Dec 16 12:36:48.321855 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:36:48.329051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
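
The raid6 and xor entries above are boot-time micro-benchmarks: the kernel times each candidate implementation and keeps the fastest one. A sketch of that selection over the gen() numbers from this log:

    # Throughputs copied from the raid6 benchmark lines above (MB/s).
    gen_results = {
        "neonx8": 15098, "neonx4": 15730, "neonx2": 13029, "neonx1": 6219,
        "int64x8": 6051, "int64x4": 7220, "int64x2": 5978, "int64x1": 5052,
    }
    best = max(gen_results, key=gen_results.get)
    print(f"using algorithm {best} gen() {gen_results[best]} MB/s")
    # -> using algorithm neonx4 gen() 15730 MB/s
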
Dec 16 12:36:48.331564 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:36:48.357976 systemd-udevd[504]: Using default interface naming scheme 'v255'.
Dec 16 12:36:48.362234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:36:48.364723 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:36:48.394113 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation
Dec 16 12:36:48.422437 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:36:48.424778 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:36:48.488047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:36:48.491478 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:36:48.553679 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 16 12:36:48.556949 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 16 12:36:48.563932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:36:48.572175 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:36:48.572198 kernel: GPT:9289727 != 19775487
Dec 16 12:36:48.572207 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:36:48.572216 kernel: GPT:9289727 != 19775487
Dec 16 12:36:48.572224 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:36:48.572233 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:36:48.564062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:36:48.573776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:36:48.577074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:36:48.602870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 16 12:36:48.610901 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:36:48.612050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:36:48.632983 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 16 12:36:48.644714 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 16 12:36:48.646027 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 16 12:36:48.657827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 16 12:36:48.659450 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:36:48.661525 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:36:48.664210 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:36:48.667406 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:36:48.670965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:36:48.691954 disk-uuid[595]: Primary Header is updated.
Dec 16 12:36:48.691954 disk-uuid[595]: Secondary Entries is updated.
Dec 16 12:36:48.691954 disk-uuid[595]: Secondary Header is updated.
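
The "GPT:9289727 != 19775487" warnings above are expected on first boot: the backup GPT header must live in the last LBA of the disk, but the primary header in the shipped image still records the last sector of the smaller build-time image; disk-uuid.service then rewrites the headers (the "Primary Header is updated" lines). A worked check of the two numbers:

    SECTOR = 512
    total_sectors = 19_775_488        # virtio_blk: [vda] 19775488 512-byte blocks

    expected_alt_lba = total_sectors - 1   # backup header belongs in the last LBA
    recorded_alt_lba = 9_289_727           # what the image's primary header says
    print(expected_alt_lba, recorded_alt_lba)
    # 19775487 9289727 -> hence "GPT:9289727 != 19775487" until it is rewritten
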
Dec 16 12:36:48.694344 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:36:48.697860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:36:48.701849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:36:49.711844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 16 12:36:49.712037 disk-uuid[600]: The operation has completed successfully.
Dec 16 12:36:49.737854 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:36:49.737951 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:36:49.763969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:36:49.788146 sh[614]: Success
Dec 16 12:36:49.800903 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:36:49.800964 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:36:49.800976 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:36:49.809841 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:36:49.838770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:36:49.841833 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:36:49.859476 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:36:49.867845 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (626)
Dec 16 12:36:49.869645 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:36:49.869663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:36:49.874852 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:36:49.874899 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:36:49.876598 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:36:49.877629 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:36:49.879030 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:36:49.879948 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:36:49.883196 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:36:49.906854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Dec 16 12:36:49.908927 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:36:49.908979 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:36:49.912111 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:36:49.912182 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:36:49.917838 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:36:49.919866 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:36:49.921899 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:36:50.002031 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
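
verity-setup.service above activates /dev/mapper/usr as a dm-verity target: every 4 KiB block of the /usr partition is hashed into a Merkle tree whose root must equal the verity.usrhash value from the kernel command line. A conceptual sketch of that tree construction (real dm-verity also mixes in a per-image salt and reads its parameters from an on-disk superblock, both omitted here for brevity):

    import hashlib

    BLOCK = 4096

    def verity_root(data):
        """Hash 4 KiB blocks, then hash the packed digests level by level."""
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        while len(level) > 1:
            packed = b"".join(level)
            level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                     for i in range(0, len(packed), BLOCK)]
        return level[0]

    # The mount proceeds only if the recomputed root equals verity.usrhash.
    print(verity_root(b"\0" * BLOCK * 8).hex())
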
Dec 16 12:36:50.010002 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:36:50.036573 ignition[704]: Ignition 2.22.0
Dec 16 12:36:50.036587 ignition[704]: Stage: fetch-offline
Dec 16 12:36:50.036626 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:36:50.036634 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:36:50.036730 ignition[704]: parsed url from cmdline: ""
Dec 16 12:36:50.036734 ignition[704]: no config URL provided
Dec 16 12:36:50.036738 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:36:50.036745 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:36:50.036768 ignition[704]: op(1): [started] loading QEMU firmware config module
Dec 16 12:36:50.036771 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 12:36:50.043558 ignition[704]: op(1): [finished] loading QEMU firmware config module
Dec 16 12:36:50.057215 systemd-networkd[808]: lo: Link UP
Dec 16 12:36:50.057229 systemd-networkd[808]: lo: Gained carrier
Dec 16 12:36:50.057952 systemd-networkd[808]: Enumeration completed
Dec 16 12:36:50.058100 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:36:50.058787 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:36:50.058791 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:36:50.059265 systemd-networkd[808]: eth0: Link UP
Dec 16 12:36:50.060018 systemd[1]: Reached target network.target - Network.
Dec 16 12:36:50.060019 systemd-networkd[808]: eth0: Gained carrier
Dec 16 12:36:50.060030 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:36:50.093920 systemd-networkd[808]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 16 12:36:50.103988 ignition[704]: parsing config with SHA512: 85bb6581e2ee826c3288c17d3272f5e9ecbb623c8479a2d5d41d8946a42cc0528b93ddfa26c159be163cc8ab2951a28ddafe77bc5b516268e523f4ce37b7aaf1
Dec 16 12:36:50.108324 unknown[704]: fetched base config from "system"
Dec 16 12:36:50.108336 unknown[704]: fetched user config from "qemu"
Dec 16 12:36:50.108697 ignition[704]: fetch-offline: fetch-offline passed
Dec 16 12:36:50.108754 ignition[704]: Ignition finished successfully
Dec 16 12:36:50.111328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:36:50.113262 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:36:50.114066 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:36:50.150649 ignition[816]: Ignition 2.22.0
Dec 16 12:36:50.150663 ignition[816]: Stage: kargs
Dec 16 12:36:50.150833 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:36:50.150843 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:36:50.151598 ignition[816]: kargs: kargs passed
Dec 16 12:36:50.151654 ignition[816]: Ignition finished successfully
Dec 16 12:36:50.157530 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:36:50.159559 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
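
The long "parsing config with SHA512" digest above is simply the fingerprint Ignition logs for the raw config bytes before parsing; on QEMU the user config arrives through the fw_cfg interface that op(1) loads the qemu_fw_cfg module for. A sketch of reproducing that kind of fingerprint (the sysfs path is the conventional one for the opt/com.coreos/config entry; treat it as an assumption about this VM):

    import hashlib, json, pathlib

    raw = pathlib.Path(
        "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"
    ).read_bytes()
    print("parsing config with SHA512:", hashlib.sha512(raw).hexdigest())
    cfg = json.loads(raw)  # Ignition configs are JSON documents
    print(cfg.get("ignition", {}).get("version"))
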
Dec 16 12:36:50.192652 ignition[825]: Ignition 2.22.0
Dec 16 12:36:50.192669 ignition[825]: Stage: disks
Dec 16 12:36:50.192838 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:36:50.192848 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:36:50.193648 ignition[825]: disks: disks passed
Dec 16 12:36:50.193697 ignition[825]: Ignition finished successfully
Dec 16 12:36:50.197670 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:36:50.200207 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:36:50.201912 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:36:50.203698 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:36:50.205508 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:36:50.207214 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:36:50.212953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:36:50.244437 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:36:50.248579 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:36:50.251087 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:36:50.337868 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:36:50.337731 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:36:50.339034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:36:50.342626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:36:50.344248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:36:50.345202 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:36:50.345247 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:36:50.345274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:36:50.365007 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:36:50.367814 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:36:50.370950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (843)
Dec 16 12:36:50.372955 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:36:50.373005 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:36:50.375849 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:36:50.375904 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:36:50.377718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:36:50.418806 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:36:50.423503 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:36:50.427229 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:36:50.431102 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:36:50.506258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:36:50.508427 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:36:50.510132 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:36:50.527861 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:36:50.539466 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:36:50.564575 ignition[958]: INFO : Ignition 2.22.0
Dec 16 12:36:50.564575 ignition[958]: INFO : Stage: mount
Dec 16 12:36:50.566118 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:36:50.566118 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:36:50.566118 ignition[958]: INFO : mount: mount passed
Dec 16 12:36:50.566118 ignition[958]: INFO : Ignition finished successfully
Dec 16 12:36:50.568927 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:36:50.571472 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:36:50.867079 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:36:50.868620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:36:50.886865 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (970)
Dec 16 12:36:50.886917 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:36:50.889142 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:36:50.892280 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:36:50.892320 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:36:50.893780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:36:50.937786 ignition[987]: INFO : Ignition 2.22.0
Dec 16 12:36:50.937786 ignition[987]: INFO : Stage: files
Dec 16 12:36:50.939244 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:36:50.939244 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:36:50.939244 ignition[987]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:36:50.942151 ignition[987]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:36:50.942151 ignition[987]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:36:50.944377 ignition[987]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:36:50.944377 ignition[987]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:36:50.944377 ignition[987]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:36:50.944320 unknown[987]: wrote ssh authorized keys file for user: core
Dec 16 12:36:50.948957 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:36:50.948957 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 16 12:36:50.990197 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:36:51.162838 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:36:51.162838 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
"/sysroot/home/core/install.sh" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:36:51.166976 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:36:51.181624 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:36:51.300144 systemd-networkd[808]: eth0: Gained IPv6LL Dec 16 12:36:51.498838 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:36:51.730662 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:36:51.730662 ignition[987]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:36:51.734296 ignition[987]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 12:36:51.736663 ignition[987]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 12:36:51.754461 ignition[987]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:36:51.758069 ignition[987]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 12:36:51.759628 ignition[987]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 12:36:51.759628 ignition[987]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:36:51.759628 ignition[987]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:36:51.759628 ignition[987]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:36:51.759628 ignition[987]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:36:51.759628 ignition[987]: INFO : files: files passed Dec 16 12:36:51.759628 ignition[987]: INFO : Ignition finished successfully Dec 16 12:36:51.760416 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:36:51.763934 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:36:51.768171 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:36:51.783053 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:36:51.783155 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:36:51.786405 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 12:36:51.787961 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:36:51.787961 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:36:51.793247 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:36:51.790704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:36:51.792487 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:36:51.795217 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:36:51.843109 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:36:51.843258 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:36:51.845480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:36:51.847169 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:36:51.850878 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:36:51.852107 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:36:51.889849 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 16 12:36:51.894104 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:36:51.923747 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:36:51.926767 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:36:51.929239 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:36:51.930149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:36:51.930282 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:36:51.932887 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:36:51.936924 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:36:51.937802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:36:51.939508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:36:51.941660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:36:51.943698 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:36:51.945972 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:36:51.948080 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:36:51.950245 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:36:51.952217 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:36:51.954483 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:36:51.956544 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:36:51.956699 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:36:51.959551 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:36:51.961809 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:36:51.963913 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:36:51.964975 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:36:51.967027 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:36:51.967162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:36:51.970561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:36:51.971914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:36:51.973569 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:36:51.975459 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:36:51.975695 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:36:51.977734 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:36:51.979414 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:36:51.981142 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:36:51.981265 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:36:51.984030 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:36:51.984149 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:36:51.985849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 16 12:36:51.986017 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:36:51.987628 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:36:51.987792 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:36:51.990369 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:36:51.992643 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:36:51.993976 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:36:51.994172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:36:51.996304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:36:51.996458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:36:52.004092 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:36:52.007128 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:36:52.017215 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:36:52.022603 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:36:52.023715 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:36:52.026068 ignition[1042]: INFO : Ignition 2.22.0 Dec 16 12:36:52.026068 ignition[1042]: INFO : Stage: umount Dec 16 12:36:52.028784 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:36:52.028784 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:36:52.028784 ignition[1042]: INFO : umount: umount passed Dec 16 12:36:52.028784 ignition[1042]: INFO : Ignition finished successfully Dec 16 12:36:52.029774 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:36:52.029912 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:36:52.031952 systemd[1]: Stopped target network.target - Network. Dec 16 12:36:52.034305 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:36:52.034390 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:36:52.035405 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:36:52.035457 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:36:52.037232 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:36:52.037292 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:36:52.038740 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:36:52.038788 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:36:52.040628 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:36:52.040702 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:36:52.043529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:36:52.045242 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:36:52.057299 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:36:52.057417 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:36:52.062579 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:36:52.063620 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:36:52.063787 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 16 12:36:52.067303 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:36:52.068286 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:36:52.070495 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:36:52.070619 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:36:52.075126 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:36:52.076514 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:36:52.076597 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:36:52.080551 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:36:52.080696 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:36:52.087780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:36:52.087856 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:36:52.089810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:36:52.090642 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:36:52.094004 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:36:52.100441 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:36:52.100519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:36:52.111652 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:36:52.111774 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:36:52.115601 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:36:52.115752 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:36:52.119346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:36:52.119406 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:36:52.122260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:36:52.122325 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:36:52.124790 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:36:52.124873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:36:52.127661 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:36:52.127723 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:36:52.130837 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:36:52.131008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:36:52.135292 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:36:52.137536 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:36:52.137702 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:36:52.140940 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:36:52.141005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:36:52.144259 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Dec 16 12:36:52.144316 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:36:52.147844 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:36:52.147909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:36:52.150618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:36:52.150681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:36:52.155009 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 12:36:52.155098 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 12:36:52.155136 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 12:36:52.155168 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:36:52.159729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:36:52.159843 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:36:52.161587 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:36:52.164789 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:36:52.190573 systemd[1]: Switching root. Dec 16 12:36:52.233036 systemd-journald[243]: Journal stopped Dec 16 12:36:53.120766 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Dec 16 12:36:53.120833 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:36:53.120847 kernel: SELinux: policy capability open_perms=1 Dec 16 12:36:53.120856 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:36:53.120866 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:36:53.120887 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:36:53.120900 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:36:53.120910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:36:53.120919 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:36:53.120928 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:36:53.120937 kernel: audit: type=1403 audit(1765888612.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:36:53.120947 systemd[1]: Successfully loaded SELinux policy in 60.870ms. Dec 16 12:36:53.120972 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.746ms. Dec 16 12:36:53.120987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:36:53.120998 systemd[1]: Detected virtualization kvm. Dec 16 12:36:53.121008 systemd[1]: Detected architecture arm64. Dec 16 12:36:53.121019 systemd[1]: Detected first boot. Dec 16 12:36:53.121029 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:36:53.121039 zram_generator::config[1088]: No configuration found. Dec 16 12:36:53.121050 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:36:53.121061 systemd[1]: Populated /etc with preset unit settings. 
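"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided DMI product UUID instead of generating a random one, so the machine identity is stable across reinstalls of the same VM. A sketch of the relationship, assuming the standard sysfs path and that the ID is the UUID parsed as a 128-bit value (dashes stripped, lowercased):

# Illustration only: compare the VM's DMI product UUID with the machine ID
# systemd derived from it on first boot. Runs on the booted system.
import pathlib

product_uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
machine_id = pathlib.Path("/etc/machine-id").read_text().strip()

print("DMI product UUID:", product_uuid)
print("machine-id      :", machine_id)
print("derived from UUID:", product_uuid.replace("-", "").lower() == machine_id)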
Dec 16 12:36:53.121074 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:36:53.121084 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:36:53.121095 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:36:53.121106 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:36:53.121118 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:36:53.121128 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:36:53.121140 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:36:53.121150 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:36:53.121163 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:36:53.121174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:36:53.121184 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:36:53.121195 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:36:53.121205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:36:53.121216 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:36:53.121227 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:36:53.121237 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:36:53.121248 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:36:53.121260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:36:53.121271 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:36:53.121286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:36:53.121297 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:36:53.121307 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:36:53.121317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:36:53.121327 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:36:53.121338 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:36:53.121364 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:36:53.121375 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:36:53.121402 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:36:53.121413 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:36:53.121425 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:36:53.121435 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:36:53.121445 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:36:53.121455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:36:53.121466 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Dec 16 12:36:53.121477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:36:53.121488 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:36:53.121498 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:36:53.121509 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:36:53.121520 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:36:53.121530 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:36:53.121540 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:36:53.121551 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:36:53.121562 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:36:53.121574 systemd[1]: Reached target machines.target - Containers. Dec 16 12:36:53.121585 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:36:53.121595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:36:53.121605 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:36:53.121615 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:36:53.121625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:36:53.121643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:36:53.121654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:36:53.121665 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:36:53.121675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:36:53.121685 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:36:53.121695 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:36:53.121705 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:36:53.121715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:36:53.121724 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:36:53.121736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:36:53.121747 kernel: fuse: init (API version 7.41) Dec 16 12:36:53.121756 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:36:53.121766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:36:53.121776 kernel: loop: module loaded Dec 16 12:36:53.121786 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:36:53.121796 kernel: ACPI: bus type drm_connector registered Dec 16 12:36:53.121805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:36:53.121815 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 12:36:53.121836 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:36:53.121848 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:36:53.121858 systemd[1]: Stopped verity-setup.service. Dec 16 12:36:53.121868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:36:53.121878 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:36:53.121888 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:36:53.121926 systemd-journald[1156]: Collecting audit messages is disabled. Dec 16 12:36:53.121950 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:36:53.121961 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:36:53.121972 systemd-journald[1156]: Journal started Dec 16 12:36:53.122035 systemd-journald[1156]: Runtime Journal (/run/log/journal/868fe84d11e043e59c8969e1194fc4e4) is 6M, max 48.5M, 42.4M free. Dec 16 12:36:52.887020 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:36:52.914972 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:36:52.915372 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:36:53.125359 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:36:53.126213 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:36:53.128855 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:36:53.130109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:36:53.131549 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:36:53.131737 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:36:53.133060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:36:53.133211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:36:53.134367 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:36:53.134527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:36:53.135739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:36:53.135918 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:36:53.137322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:36:53.137475 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:36:53.140153 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:36:53.140330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:36:53.141650 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:36:53.143880 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:36:53.145316 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:36:53.146858 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:36:53.159527 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:36:53.161888 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:36:53.163871 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 16 12:36:53.164914 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:36:53.164945 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:36:53.166867 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:36:53.174738 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:36:53.175979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:36:53.177064 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:36:53.178873 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:36:53.180003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:36:53.183996 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:36:53.185099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:36:53.186745 systemd-journald[1156]: Time spent on flushing to /var/log/journal/868fe84d11e043e59c8969e1194fc4e4 is 15.198ms for 884 entries. Dec 16 12:36:53.186745 systemd-journald[1156]: System Journal (/var/log/journal/868fe84d11e043e59c8969e1194fc4e4) is 8M, max 195.6M, 187.6M free. Dec 16 12:36:53.218066 systemd-journald[1156]: Received client request to flush runtime journal. Dec 16 12:36:53.218123 kernel: loop0: detected capacity change from 0 to 100632 Dec 16 12:36:53.189178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:36:53.192396 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:36:53.197021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:36:53.200123 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:36:53.201758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:36:53.205316 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:36:53.206770 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:36:53.211268 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:36:53.224079 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:36:53.227149 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:36:53.234860 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:36:53.238362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:36:53.240162 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Dec 16 12:36:53.240182 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Dec 16 12:36:53.243348 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:36:53.248119 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 16 12:36:53.253875 kernel: loop1: detected capacity change from 0 to 207008 Dec 16 12:36:53.265046 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:36:53.285883 kernel: loop2: detected capacity change from 0 to 119840 Dec 16 12:36:53.290530 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:36:53.295040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:36:53.321010 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 16 12:36:53.321028 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 16 12:36:53.321874 kernel: loop3: detected capacity change from 0 to 100632 Dec 16 12:36:53.327490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:36:53.330852 kernel: loop4: detected capacity change from 0 to 207008 Dec 16 12:36:53.337855 kernel: loop5: detected capacity change from 0 to 119840 Dec 16 12:36:53.342725 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 16 12:36:53.343138 (sd-merge)[1230]: Merged extensions into '/usr'. Dec 16 12:36:53.349023 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:36:53.349047 systemd[1]: Reloading... Dec 16 12:36:53.412022 zram_generator::config[1260]: No configuration found. Dec 16 12:36:53.497676 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:36:53.553343 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:36:53.553436 systemd[1]: Reloading finished in 204 ms. Dec 16 12:36:53.570704 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:36:53.573841 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:36:53.589058 systemd[1]: Starting ensure-sysext.service... Dec 16 12:36:53.590845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:36:53.600088 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:36:53.600107 systemd[1]: Reloading... Dec 16 12:36:53.612275 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:36:53.612321 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:36:53.612604 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:36:53.612883 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:36:53.614304 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:36:53.614763 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 16 12:36:53.614914 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Dec 16 12:36:53.634734 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:36:53.634748 systemd-tmpfiles[1293]: Skipping /boot Dec 16 12:36:53.645853 zram_generator::config[1318]: No configuration found. Dec 16 12:36:53.648655 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. 
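The (sd-merge) lines are systemd-sysext merging the three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') into /usr as an overlay; the kubernetes image is the one Ignition linked into /etc/extensions earlier, and the loop devices above are those images being attached. A sketch listing images in the mutable sysext search paths (there are further read-only paths under /usr that this omits):

# Sketch: enumerate extension images where systemd-sysext discovers them.
import pathlib

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
for d in SEARCH_PATHS:
    p = pathlib.Path(d)
    if p.is_dir():
        for image in sorted(p.iterdir()):
            print(f"{d}: {image.name}")

On the running system, `systemd-sysext status` reports the same merge result.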
Dec 16 12:36:53.648668 systemd-tmpfiles[1293]: Skipping /boot Dec 16 12:36:53.791774 systemd[1]: Reloading finished in 191 ms. Dec 16 12:36:53.812612 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:36:53.814140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:36:53.832792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:36:53.835183 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:36:53.837142 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:36:53.841619 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:36:53.845197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:36:53.849714 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:36:53.859248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:36:53.864189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:36:53.866470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:36:53.869279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:36:53.870476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:36:53.870654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:36:53.873364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:36:53.875544 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:36:53.877385 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:36:53.877565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:36:53.884218 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:36:53.884432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:36:53.892291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:36:53.895362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:36:53.895562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:36:53.898087 augenrules[1390]: No rules Dec 16 12:36:53.900374 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:36:53.900610 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:36:53.900785 systemd-udevd[1361]: Using default interface naming scheme 'v255'. Dec 16 12:36:53.902461 systemd[1]: Finished ensure-sysext.service. Dec 16 12:36:53.903672 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:36:53.909388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:36:53.910751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:36:53.914147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 16 12:36:53.923028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:36:53.924123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:36:53.924187 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:36:53.924230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:36:53.926671 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:36:53.929672 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:36:53.930780 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:36:53.933299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:36:53.935015 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:36:53.937604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:36:53.937927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:36:53.940264 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:36:53.940993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:36:53.943344 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:36:53.943611 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:36:53.994403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:36:53.995366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:36:53.996728 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:36:54.004374 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:36:54.047938 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:36:54.052075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:36:54.081892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:36:54.095679 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:36:54.098122 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:36:54.104523 systemd-resolved[1360]: Positive Trust Anchors: Dec 16 12:36:54.104537 systemd-resolved[1360]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:36:54.104542 systemd-networkd[1440]: lo: Link UP Dec 16 12:36:54.104547 systemd-networkd[1440]: lo: Gained carrier Dec 16 12:36:54.104568 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:36:54.105437 systemd-networkd[1440]: Enumeration completed Dec 16 12:36:54.106002 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:36:54.106010 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:36:54.106413 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:36:54.109143 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:36:54.111652 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:36:54.113142 systemd-networkd[1440]: eth0: Link UP Dec 16 12:36:54.113272 systemd-networkd[1440]: eth0: Gained carrier Dec 16 12:36:54.113297 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:36:54.113742 systemd-resolved[1360]: Defaulting to hostname 'linux'. Dec 16 12:36:54.115810 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:36:54.117020 systemd[1]: Reached target network.target - Network. Dec 16 12:36:54.117851 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:36:54.118805 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:36:54.120261 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:36:54.121747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:36:54.123361 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:36:54.124879 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:36:54.126115 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:36:54.127233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:36:54.127270 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:36:54.128262 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:36:54.131101 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:36:54.133689 systemd[1]: Starting docker.socket - Docker Socket for the API... 
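The positive trust anchor logged by systemd-resolved above is the IANA root-zone DS record for the 2017 root key (KSK-2017). Its fields decode as key tag 20326, DNSSEC algorithm 8 (RSA/SHA-256), and digest type 2 (a SHA-256 digest of the root DNSKEY):

# Decode the DS record exactly as logged by systemd-resolved.
ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
_, _, _, key_tag, algorithm, digest_type, digest = ds.split()

print("key tag    :", key_tag)      # 20326 -> root KSK-2017
print("algorithm  :", algorithm)    # 8 -> RSASHA256
print("digest type:", digest_type)  # 2 -> SHA-256
print("digest     :", digest, f"({len(digest) * 4} bits)")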
Dec 16 12:36:54.135919 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:36:54.136621 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:36:54.138224 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:36:54.138975 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Dec 16 12:36:54.139375 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:36:54.141179 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:36:54.142043 systemd-timesyncd[1405]: Initial clock synchronization to Tue 2025-12-16 12:36:53.816112 UTC. Dec 16 12:36:54.152018 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:36:54.154318 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:36:54.156646 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:36:54.158141 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:36:54.159846 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:36:54.160689 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:36:54.161741 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:36:54.161775 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:36:54.164417 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:36:54.167536 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:36:54.171071 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:36:54.174050 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:36:54.179286 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:36:54.180267 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:36:54.182181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:36:54.184293 jq[1476]: false Dec 16 12:36:54.184969 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:36:54.187369 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:36:54.191056 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:36:54.194534 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:36:54.196502 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:36:54.197737 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:36:54.198963 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:36:54.204024 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:36:54.211173 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
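The DHCPv4 lease above puts eth0 at 10.0.0.86 in a /16, with the gateway and the NTP server both at 10.0.0.1. What that /16 implies, checked with the standard library:

# Quick sanity check of the logged lease parameters.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.86/16")
gateway = ipaddress.ip_address("10.0.0.1")

print("network   :", iface.network)                    # 10.0.0.0/16
print("netmask   :", iface.network.netmask)            # 255.255.0.0
print("usable    :", iface.network.num_addresses - 2)  # 65534 host addresses
print("gw on-link:", gateway in iface.network)         # True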
Dec 16 12:36:54.216470 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:36:54.216713 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:36:54.217648 extend-filesystems[1477]: Found /dev/vda6 Dec 16 12:36:54.220280 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:36:54.220491 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:36:54.222054 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:36:54.222254 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:36:54.224312 jq[1491]: true Dec 16 12:36:54.225809 extend-filesystems[1477]: Found /dev/vda9 Dec 16 12:36:54.236858 extend-filesystems[1477]: Checking size of /dev/vda9 Dec 16 12:36:54.239764 update_engine[1488]: I20251216 12:36:54.236618 1488 main.cc:92] Flatcar Update Engine starting Dec 16 12:36:54.240011 tar[1497]: linux-arm64/LICENSE Dec 16 12:36:54.240685 tar[1497]: linux-arm64/helm Dec 16 12:36:54.243917 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:36:54.249891 jq[1501]: true Dec 16 12:36:54.255100 extend-filesystems[1477]: Resized partition /dev/vda9 Dec 16 12:36:54.261356 extend-filesystems[1518]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:36:54.275091 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 16 12:36:54.282766 dbus-daemon[1474]: [system] SELinux support is enabled Dec 16 12:36:54.284344 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:36:54.289457 update_engine[1488]: I20251216 12:36:54.289376 1488 update_check_scheduler.cc:74] Next update check in 6m26s Dec 16 12:36:54.290261 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:36:54.290402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:36:54.293110 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:36:54.293243 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:36:54.295330 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:36:54.299315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:36:54.305646 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:36:54.338064 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 16 12:36:54.356675 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:36:54.356675 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:36:54.356675 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 16 12:36:54.363954 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Dec 16 12:36:54.359925 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:36:54.360136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
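The resize above grows the root filesystem on /dev/vda9 online from 553472 to 1864699 blocks of 4 KiB each, i.e. the image-sized root is expanded to fill its partition on first boot. The byte arithmetic:

# Sizes implied by the logged block counts (4 KiB ext4 blocks).
BLOCK = 4096
old_blocks, new_blocks = 553472, 1864699

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {old_blocks * BLOCK} bytes ({gib(old_blocks):.2f} GiB)")
print(f"after : {new_blocks * BLOCK} bytes ({gib(new_blocks):.2f} GiB)")
print(f"gained: {gib(new_blocks - old_blocks):.2f} GiB")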
Dec 16 12:36:54.374749 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:36:54.377465 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:36:54.379779 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:36:54.393542 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:36:54.411159 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:36:54.411935 systemd-logind[1482]: New seat seat0. Dec 16 12:36:54.420915 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:36:54.428232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:36:54.487710 containerd[1499]: time="2025-12-16T12:36:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:36:54.488590 containerd[1499]: time="2025-12-16T12:36:54.488539360Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498427160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="56.48µs" Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498510160Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498546440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498743760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498770160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498804720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498881640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:36:54.499838 containerd[1499]: time="2025-12-16T12:36:54.498899680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501120 containerd[1499]: time="2025-12-16T12:36:54.501071680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501120 containerd[1499]: time="2025-12-16T12:36:54.501113480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501179 containerd[1499]: time="2025-12-16T12:36:54.501129960Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501179 
containerd[1499]: time="2025-12-16T12:36:54.501139080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501291 containerd[1499]: time="2025-12-16T12:36:54.501271000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501509 containerd[1499]: time="2025-12-16T12:36:54.501485880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501537 containerd[1499]: time="2025-12-16T12:36:54.501523480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:36:54.501537 containerd[1499]: time="2025-12-16T12:36:54.501534400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:36:54.501611 containerd[1499]: time="2025-12-16T12:36:54.501595600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:36:54.501911 containerd[1499]: time="2025-12-16T12:36:54.501892120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:36:54.501993 containerd[1499]: time="2025-12-16T12:36:54.501973800Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:36:54.507993 containerd[1499]: time="2025-12-16T12:36:54.507941400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:36:54.508124 containerd[1499]: time="2025-12-16T12:36:54.508035240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:36:54.508124 containerd[1499]: time="2025-12-16T12:36:54.508052480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:36:54.508124 containerd[1499]: time="2025-12-16T12:36:54.508066320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:36:54.508124 containerd[1499]: time="2025-12-16T12:36:54.508083840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:36:54.508124 containerd[1499]: time="2025-12-16T12:36:54.508096520Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:36:54.508225 containerd[1499]: time="2025-12-16T12:36:54.508133480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:36:54.508225 containerd[1499]: time="2025-12-16T12:36:54.508149960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:36:54.508225 containerd[1499]: time="2025-12-16T12:36:54.508167800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:36:54.508225 containerd[1499]: time="2025-12-16T12:36:54.508180720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:36:54.508225 containerd[1499]: time="2025-12-16T12:36:54.508194160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:36:54.508306 containerd[1499]: 
time="2025-12-16T12:36:54.508232080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:36:54.508411 containerd[1499]: time="2025-12-16T12:36:54.508388240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:36:54.508436 containerd[1499]: time="2025-12-16T12:36:54.508417960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:36:54.508436 containerd[1499]: time="2025-12-16T12:36:54.508433400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:36:54.508469 containerd[1499]: time="2025-12-16T12:36:54.508447880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:36:54.508469 containerd[1499]: time="2025-12-16T12:36:54.508460440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:36:54.508500 containerd[1499]: time="2025-12-16T12:36:54.508470960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:36:54.508500 containerd[1499]: time="2025-12-16T12:36:54.508483360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:36:54.508500 containerd[1499]: time="2025-12-16T12:36:54.508493680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:36:54.508555 containerd[1499]: time="2025-12-16T12:36:54.508506000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:36:54.508555 containerd[1499]: time="2025-12-16T12:36:54.508517120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:36:54.508555 containerd[1499]: time="2025-12-16T12:36:54.508527640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:36:54.508733 containerd[1499]: time="2025-12-16T12:36:54.508715360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:36:54.508761 containerd[1499]: time="2025-12-16T12:36:54.508735720Z" level=info msg="Start snapshots syncer" Dec 16 12:36:54.508785 containerd[1499]: time="2025-12-16T12:36:54.508759360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:36:54.509101 containerd[1499]: time="2025-12-16T12:36:54.509058640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:36:54.509216 containerd[1499]: time="2025-12-16T12:36:54.509123040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:36:54.509216 containerd[1499]: time="2025-12-16T12:36:54.509177200Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:36:54.509338 containerd[1499]: time="2025-12-16T12:36:54.509315440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:36:54.509368 containerd[1499]: time="2025-12-16T12:36:54.509348680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:36:54.509368 containerd[1499]: time="2025-12-16T12:36:54.509362320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:36:54.509401 containerd[1499]: time="2025-12-16T12:36:54.509377560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:36:54.509401 containerd[1499]: time="2025-12-16T12:36:54.509391240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:36:54.509439 containerd[1499]: time="2025-12-16T12:36:54.509404560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:36:54.509439 containerd[1499]: time="2025-12-16T12:36:54.509421240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:36:54.509472 containerd[1499]: time="2025-12-16T12:36:54.509449440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:36:54.509472 containerd[1499]: 
time="2025-12-16T12:36:54.509462600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:36:54.509504 containerd[1499]: time="2025-12-16T12:36:54.509475080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:36:54.509522 containerd[1499]: time="2025-12-16T12:36:54.509513760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:36:54.509542 containerd[1499]: time="2025-12-16T12:36:54.509533520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:36:54.509561 containerd[1499]: time="2025-12-16T12:36:54.509542960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:36:54.509561 containerd[1499]: time="2025-12-16T12:36:54.509552720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:36:54.509593 containerd[1499]: time="2025-12-16T12:36:54.509561760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:36:54.509593 containerd[1499]: time="2025-12-16T12:36:54.509572040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:36:54.509637 containerd[1499]: time="2025-12-16T12:36:54.509599400Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.509690800Z" level=info msg="runtime interface created" Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.509701760Z" level=info msg="created NRI interface" Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.509711280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.509726000Z" level=info msg="Connect containerd service" Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.509749600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:36:54.510831 containerd[1499]: time="2025-12-16T12:36:54.510556600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:36:54.586050 containerd[1499]: time="2025-12-16T12:36:54.585928400Z" level=info msg="Start subscribing containerd event" Dec 16 12:36:54.586050 containerd[1499]: time="2025-12-16T12:36:54.586020320Z" level=info msg="Start recovering state" Dec 16 12:36:54.586172 containerd[1499]: time="2025-12-16T12:36:54.586120160Z" level=info msg="Start event monitor" Dec 16 12:36:54.586172 containerd[1499]: time="2025-12-16T12:36:54.586134600Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:36:54.586172 containerd[1499]: time="2025-12-16T12:36:54.586142280Z" level=info msg="Start streaming server" Dec 16 12:36:54.586172 containerd[1499]: time="2025-12-16T12:36:54.586151760Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:36:54.586172 containerd[1499]: 
time="2025-12-16T12:36:54.586158960Z" level=info msg="runtime interface starting up..." Dec 16 12:36:54.586172 containerd[1499]: time="2025-12-16T12:36:54.586164240Z" level=info msg="starting plugins..." Dec 16 12:36:54.586267 containerd[1499]: time="2025-12-16T12:36:54.586229280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:36:54.586397 containerd[1499]: time="2025-12-16T12:36:54.586367200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:36:54.586430 containerd[1499]: time="2025-12-16T12:36:54.586422600Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:36:54.587947 containerd[1499]: time="2025-12-16T12:36:54.587918520Z" level=info msg="containerd successfully booted in 0.101183s" Dec 16 12:36:54.588513 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:36:54.591102 tar[1497]: linux-arm64/README.md Dec 16 12:36:54.613523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:36:55.232049 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:36:55.252302 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:36:55.256983 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:36:55.273638 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:36:55.273908 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:36:55.276470 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:36:55.297864 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:36:55.300603 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:36:55.302761 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:36:55.304003 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:36:55.715970 systemd-networkd[1440]: eth0: Gained IPv6LL Dec 16 12:36:55.718639 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:36:55.720262 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:36:55.723362 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:36:55.725591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:36:55.727637 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:36:55.753859 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:36:55.755209 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:36:55.755394 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 12:36:55.757407 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:36:56.310107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:36:56.311774 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:36:56.315845 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:36:56.316971 systemd[1]: Startup finished in 2.110s (kernel) + 4.852s (initrd) + 3.910s (userspace) = 10.873s. 
Dec 16 12:36:56.676710 kubelet[1612]: E1216 12:36:56.676619 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:36:56.678841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:36:56.678969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:36:56.679263 systemd[1]: kubelet.service: Consumed 758ms CPU time, 256.3M memory peak. Dec 16 12:37:00.263121 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:37:00.264048 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:54490.service - OpenSSH per-connection server daemon (10.0.0.1:54490). Dec 16 12:37:00.360166 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 54490 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:00.362055 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:00.367919 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:37:00.368780 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:37:00.373711 systemd-logind[1482]: New session 1 of user core. Dec 16 12:37:00.391880 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:37:00.394774 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:37:00.407868 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:37:00.410254 systemd-logind[1482]: New session c1 of user core. Dec 16 12:37:00.522238 systemd[1631]: Queued start job for default target default.target. Dec 16 12:37:00.542034 systemd[1631]: Created slice app.slice - User Application Slice. Dec 16 12:37:00.542066 systemd[1631]: Reached target paths.target - Paths. Dec 16 12:37:00.542102 systemd[1631]: Reached target timers.target - Timers. Dec 16 12:37:00.543390 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:37:00.552832 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:37:00.552897 systemd[1631]: Reached target sockets.target - Sockets. Dec 16 12:37:00.552934 systemd[1631]: Reached target basic.target - Basic System. Dec 16 12:37:00.552960 systemd[1631]: Reached target default.target - Main User Target. Dec 16 12:37:00.552984 systemd[1631]: Startup finished in 136ms. Dec 16 12:37:00.553112 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:37:00.554624 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:37:00.619011 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:54496.service - OpenSSH per-connection server daemon (10.0.0.1:54496). Dec 16 12:37:00.679614 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 54496 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:00.681068 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:00.685300 systemd-logind[1482]: New session 2 of user core. Dec 16 12:37:00.698027 systemd[1]: Started session-2.scope - Session 2 of User core. 
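[Annotation] The kubelet exit above is the normal pre-bootstrap state on this image: the unit points --config at /var/lib/kubelet/config.yaml, which is only written once `kubeadm init` or `kubeadm join` runs, so the service fails fast and systemd retries it (the scheduled restart shows up further down). A minimal pre-flight sketch of the same check, assuming only that the path in the error message is the configured one:

```python
#!/usr/bin/env python3
"""Mirror the fail-fast check behind the kubelet error above.

Sketch only: the path comes from the error message in this log; the file
is normally created by `kubeadm init`/`kubeadm join`, which has not run
yet at this point in the boot.
"""
import pathlib
import sys

config = pathlib.Path("/var/lib/kubelet/config.yaml")
if not config.is_file():
    sys.exit(f"kubelet cannot start yet: {config} is missing")
print("kubelet config present, safe to start the unit")
```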
Dec 16 12:37:00.748131 sshd[1645]: Connection closed by 10.0.0.1 port 54496 Dec 16 12:37:00.748614 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:00.761030 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:54496.service: Deactivated successfully. Dec 16 12:37:00.764230 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:37:00.764855 systemd-logind[1482]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:37:00.767053 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:54502.service - OpenSSH per-connection server daemon (10.0.0.1:54502). Dec 16 12:37:00.768077 systemd-logind[1482]: Removed session 2. Dec 16 12:37:00.826373 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 54502 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:00.827706 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:00.833728 systemd-logind[1482]: New session 3 of user core. Dec 16 12:37:00.842033 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:37:00.892430 sshd[1655]: Connection closed by 10.0.0.1 port 54502 Dec 16 12:37:00.893029 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:00.903406 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:54502.service: Deactivated successfully. Dec 16 12:37:00.906299 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:37:00.907034 systemd-logind[1482]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:37:00.910678 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:54506.service - OpenSSH per-connection server daemon (10.0.0.1:54506). Dec 16 12:37:00.911267 systemd-logind[1482]: Removed session 3. Dec 16 12:37:00.992127 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 54506 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:00.993369 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:00.997716 systemd-logind[1482]: New session 4 of user core. Dec 16 12:37:01.006977 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:37:01.058754 sshd[1664]: Connection closed by 10.0.0.1 port 54506 Dec 16 12:37:01.059125 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:01.070970 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:54506.service: Deactivated successfully. Dec 16 12:37:01.072689 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:37:01.074037 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:37:01.078513 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:43138.service - OpenSSH per-connection server daemon (10.0.0.1:43138). Dec 16 12:37:01.079203 systemd-logind[1482]: Removed session 4. Dec 16 12:37:01.146704 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 43138 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:01.147960 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:01.151900 systemd-logind[1482]: New session 5 of user core. Dec 16 12:37:01.158016 systemd[1]: Started session-5.scope - Session 5 of User core. 
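[Annotation] Sessions 2 through 4 above each open and close within well under a second, the typical signature of a provisioner running one command per SSH connection. To measure that from the journal, a sketch that pairs the systemd-logind "New session"/"logged out" lines; it assumes one journal entry per line on stdin, shaped exactly like the entries above with the `Dec 16 HH:MM:SS.ffffff` prefix:

```python
#!/usr/bin/env python3
"""Pair systemd-logind session open/close events into durations.

Sketch: expects journal lines like the ones above on stdin. The 22-char
timestamp prefix ("Dec 16 12:37:00.748131") carries no year, so datetime
defaults it to 1900, which is harmless for computing deltas.
"""
import re
import sys
from datetime import datetime

opened = {}
for line in sys.stdin:
    try:
        when = datetime.strptime(line[:22], "%b %d %H:%M:%S.%f")
    except ValueError:
        continue  # not a journal-style line
    if m := re.search(r"New session (\S+) of user", line):
        opened[m.group(1)] = when
    elif m := re.search(r"Session (\S+) logged out", line):
        start = opened.pop(m.group(1), None)
        if start is not None:
            secs = (when - start).total_seconds()
            print(f"session {m.group(1)}: {secs:.3f}s")
```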
Dec 16 12:37:01.222984 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:37:01.223256 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:37:01.238713 sudo[1674]: pam_unix(sudo:session): session closed for user root Dec 16 12:37:01.240858 sshd[1673]: Connection closed by 10.0.0.1 port 43138 Dec 16 12:37:01.240874 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:01.250162 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:43138.service: Deactivated successfully. Dec 16 12:37:01.252921 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:37:01.254766 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:37:01.258277 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:43150.service - OpenSSH per-connection server daemon (10.0.0.1:43150). Dec 16 12:37:01.259533 systemd-logind[1482]: Removed session 5. Dec 16 12:37:01.323590 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 43150 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:01.325054 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:01.328857 systemd-logind[1482]: New session 6 of user core. Dec 16 12:37:01.341031 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:37:01.390629 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:37:01.391219 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:37:01.519897 sudo[1685]: pam_unix(sudo:session): session closed for user root Dec 16 12:37:01.525591 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:37:01.525867 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:37:01.534088 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:37:01.575559 augenrules[1707]: No rules Dec 16 12:37:01.576669 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:37:01.577905 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:37:01.579257 sudo[1684]: pam_unix(sudo:session): session closed for user root Dec 16 12:37:01.581361 sshd[1683]: Connection closed by 10.0.0.1 port 43150 Dec 16 12:37:01.581243 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:01.591120 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:43150.service: Deactivated successfully. Dec 16 12:37:01.593312 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:37:01.594012 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:37:01.596333 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:43162.service - OpenSSH per-connection server daemon (10.0.0.1:43162). Dec 16 12:37:01.596893 systemd-logind[1482]: Removed session 6. Dec 16 12:37:01.659429 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 43162 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:37:01.661025 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:37:01.665309 systemd-logind[1482]: New session 7 of user core. Dec 16 12:37:01.679012 systemd[1]: Started session-7.scope - Session 7 of User core. 
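[Annotation] Each sudo entry above records PWD, the target USER, and the exact COMMAND, which keeps the provisioning steps auditable even after the 80-selinux/99-default audit rules are removed and augenrules reports "No rules". A sketch that extracts those fields from journal lines shaped like the ones above (the field layout is assumed from this log, not from sudo documentation, and one entry per line is expected):

```python
#!/usr/bin/env python3
"""Extract (user, pwd, target, command) from sudo journal lines.

Sketch: the "core : PWD=... ; USER=... ; COMMAND=..." layout is assumed
from the entries in this log; expects one journal entry per line.
"""
import re
import sys

pattern = re.compile(
    r"sudo\[\d+\]:\s+(\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)$"
)
for line in sys.stdin:
    if m := pattern.search(line):
        who, pwd, target, cmd = m.groups()
        print(f"{who} -> {target} (cwd {pwd}): {cmd.strip()}")
```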
Dec 16 12:37:01.728264 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:37:01.728874 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:37:02.027505 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:37:02.045205 (dockerd)[1740]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:37:02.270026 dockerd[1740]: time="2025-12-16T12:37:02.269939209Z" level=info msg="Starting up" Dec 16 12:37:02.271242 dockerd[1740]: time="2025-12-16T12:37:02.271203514Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:37:02.285723 dockerd[1740]: time="2025-12-16T12:37:02.285577187Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:37:02.316227 systemd[1]: var-lib-docker-metacopy\x2dcheck1149238046-merged.mount: Deactivated successfully. Dec 16 12:37:02.327620 dockerd[1740]: time="2025-12-16T12:37:02.327408039Z" level=info msg="Loading containers: start." Dec 16 12:37:02.340836 kernel: Initializing XFRM netlink socket Dec 16 12:37:02.568364 systemd-networkd[1440]: docker0: Link UP Dec 16 12:37:02.572870 dockerd[1740]: time="2025-12-16T12:37:02.572814860Z" level=info msg="Loading containers: done." Dec 16 12:37:02.591298 dockerd[1740]: time="2025-12-16T12:37:02.590635574Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:37:02.591298 dockerd[1740]: time="2025-12-16T12:37:02.590728350Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:37:02.591298 dockerd[1740]: time="2025-12-16T12:37:02.591012936Z" level=info msg="Initializing buildkit" Dec 16 12:37:02.622430 dockerd[1740]: time="2025-12-16T12:37:02.622390169Z" level=info msg="Completed buildkit initialization" Dec 16 12:37:02.629663 dockerd[1740]: time="2025-12-16T12:37:02.629607137Z" level=info msg="Daemon has completed initialization" Dec 16 12:37:02.629877 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:37:02.630754 dockerd[1740]: time="2025-12-16T12:37:02.629697118Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:37:03.156551 containerd[1499]: time="2025-12-16T12:37:03.156499828Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:37:03.701628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310268217.mount: Deactivated successfully. 
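[Annotation] dockerd goes from "Starting up" (12:37:02.269) to "Daemon has completed initialization" (12:37:02.629) in roughly 360 ms, most of it the XFRM/docker0/buildkit setup logged in between. The daemon timestamps in this log are RFC 3339 with nanoseconds, which Python can diff once truncated to microseconds; a quick check using the two values copied from above:

```python
#!/usr/bin/env python3
"""Time dockerd's startup from the two log entries above.

Sketch: timestamps are copied from this log and truncated from
nanoseconds to microseconds, since datetime only resolves microseconds.
"""
from datetime import datetime

def parse(ts: str) -> datetime:
    # "2025-12-16T12:37:02.269939209Z" -> keep 6 fractional digits
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

start = parse("2025-12-16T12:37:02.269939209Z")  # "Starting up"
done = parse("2025-12-16T12:37:02.629607137Z")   # "completed initialization"
print(f"{(done - start).total_seconds() * 1000:.0f} ms")  # ~360 ms
```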
Dec 16 12:37:04.588622 containerd[1499]: time="2025-12-16T12:37:04.588574199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:04.589883 containerd[1499]: time="2025-12-16T12:37:04.589836591Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Dec 16 12:37:04.591857 containerd[1499]: time="2025-12-16T12:37:04.591798809Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:04.594841 containerd[1499]: time="2025-12-16T12:37:04.594788049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:04.595783 containerd[1499]: time="2025-12-16T12:37:04.595744977Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.439202697s" Dec 16 12:37:04.595783 containerd[1499]: time="2025-12-16T12:37:04.595780814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:37:04.596371 containerd[1499]: time="2025-12-16T12:37:04.596326586Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 12:37:05.629754 containerd[1499]: time="2025-12-16T12:37:05.629696334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:05.630815 containerd[1499]: time="2025-12-16T12:37:05.630770610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Dec 16 12:37:05.632027 containerd[1499]: time="2025-12-16T12:37:05.631982044Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:05.635013 containerd[1499]: time="2025-12-16T12:37:05.634970403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:05.636445 containerd[1499]: time="2025-12-16T12:37:05.636402889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.04004357s" Dec 16 12:37:05.636445 containerd[1499]: time="2025-12-16T12:37:05.636441235Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 
12:37:05.636880 containerd[1499]: time="2025-12-16T12:37:05.636840403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:37:06.853364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:37:06.854915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:07.053029 containerd[1499]: time="2025-12-16T12:37:07.052984634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:07.055375 containerd[1499]: time="2025-12-16T12:37:07.055311479Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Dec 16 12:37:07.056516 containerd[1499]: time="2025-12-16T12:37:07.056434614Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:07.064077 containerd[1499]: time="2025-12-16T12:37:07.064035749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:07.064939 containerd[1499]: time="2025-12-16T12:37:07.064898829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.427942008s" Dec 16 12:37:07.064939 containerd[1499]: time="2025-12-16T12:37:07.064936835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:37:07.065459 containerd[1499]: time="2025-12-16T12:37:07.065413344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:37:07.065971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:07.070802 (kubelet)[2031]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:37:07.113398 kubelet[2031]: E1216 12:37:07.113274 2031 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:37:07.116764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:37:07.117033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:37:07.117611 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.4M memory peak. Dec 16 12:37:07.998933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216096131.mount: Deactivated successfully. 
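[Annotation] The "Scheduled restart job, restart counter is at 1" entry lands about ten seconds after the first kubelet failure at 12:36:56, consistent with a Restart=/RestartSec=10-style unit; that is an assumption, since the unit file itself is not part of this log. Checking the gap from the two journal timestamps (truncated to milliseconds):

```python
#!/usr/bin/env python3
"""Gap between the first kubelet failure and its scheduled restart.

Sketch: timestamps are copied from this log, truncated to milliseconds;
the implied RestartSec=10 is an assumption, not shown in the log.
"""
from datetime import datetime

fmt = "%b %d %H:%M:%S.%f"
failed = datetime.strptime("Dec 16 12:36:56.678", fmt)   # Main process exited
restart = datetime.strptime("Dec 16 12:37:06.853", fmt)  # Scheduled restart job
print(f"{(restart - failed).total_seconds():.2f} s")     # ~10.17 s
```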
Dec 16 12:37:08.229552 containerd[1499]: time="2025-12-16T12:37:08.229503922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:08.230167 containerd[1499]: time="2025-12-16T12:37:08.230134694Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 16 12:37:08.231001 containerd[1499]: time="2025-12-16T12:37:08.230965503Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:08.233447 containerd[1499]: time="2025-12-16T12:37:08.233403086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:08.234345 containerd[1499]: time="2025-12-16T12:37:08.234294935Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.168838142s" Dec 16 12:37:08.234345 containerd[1499]: time="2025-12-16T12:37:08.234336197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:37:08.235056 containerd[1499]: time="2025-12-16T12:37:08.235012203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:37:08.726860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364250192.mount: Deactivated successfully. 
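[Annotation] The pull records carry enough to estimate effective registry throughput: the kube-proxy pull above reports 27,560,818 bytes in 1.168838142 s, about 22.5 MiB/s. The "bytes read" counter (27561801) and the reported image size (27560818) differ slightly, so treat the figure as approximate:

```python
#!/usr/bin/env python3
"""Rough pull throughput for the kube-proxy image above.

Numbers are copied from the log; "bytes read" and the reported image
size differ slightly, so this is an approximation.
"""
size_bytes = 27_560_818   # size from the "Pulled image" entry above
duration_s = 1.168838142  # duration from the same entry
print(f"{size_bytes / duration_s / (1 << 20):.1f} MiB/s")  # ~22.5 MiB/s
```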
Dec 16 12:37:09.699585 containerd[1499]: time="2025-12-16T12:37:09.699532350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:09.701329 containerd[1499]: time="2025-12-16T12:37:09.701296072Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 16 12:37:09.704410 containerd[1499]: time="2025-12-16T12:37:09.704343183Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:09.707856 containerd[1499]: time="2025-12-16T12:37:09.707767477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:09.708800 containerd[1499]: time="2025-12-16T12:37:09.708745392Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.47369561s" Dec 16 12:37:09.708800 containerd[1499]: time="2025-12-16T12:37:09.708788640Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:37:09.709517 containerd[1499]: time="2025-12-16T12:37:09.709264321Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:37:10.144855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832547320.mount: Deactivated successfully. 
Dec 16 12:37:10.153368 containerd[1499]: time="2025-12-16T12:37:10.153303904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:37:10.154779 containerd[1499]: time="2025-12-16T12:37:10.154739551Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 16 12:37:10.155882 containerd[1499]: time="2025-12-16T12:37:10.155851861Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:37:10.158805 containerd[1499]: time="2025-12-16T12:37:10.158545694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:37:10.159642 containerd[1499]: time="2025-12-16T12:37:10.159609391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 450.291719ms" Dec 16 12:37:10.159694 containerd[1499]: time="2025-12-16T12:37:10.159641335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:37:10.160193 containerd[1499]: time="2025-12-16T12:37:10.160128291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:37:10.674938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845235350.mount: Deactivated successfully. 
Dec 16 12:37:12.004503 containerd[1499]: time="2025-12-16T12:37:12.004440639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:12.005291 containerd[1499]: time="2025-12-16T12:37:12.005245009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 16 12:37:12.006444 containerd[1499]: time="2025-12-16T12:37:12.006406633Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:12.011164 containerd[1499]: time="2025-12-16T12:37:12.011122280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:12.012234 containerd[1499]: time="2025-12-16T12:37:12.012186117Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.851848446s" Dec 16 12:37:12.012234 containerd[1499]: time="2025-12-16T12:37:12.012229972Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:37:16.826836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:16.826982 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.4M memory peak. Dec 16 12:37:16.828936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:16.852492 systemd[1]: Reload requested from client PID 2186 ('systemctl') (unit session-7.scope)... Dec 16 12:37:16.852510 systemd[1]: Reloading... Dec 16 12:37:16.931860 zram_generator::config[2225]: No configuration found. Dec 16 12:37:17.123006 systemd[1]: Reloading finished in 270 ms. Dec 16 12:37:17.180690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:17.184055 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:37:17.184276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:17.184320 systemd[1]: kubelet.service: Consumed 99ms CPU time, 95.2M memory peak. Dec 16 12:37:17.185910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:17.360321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:17.365658 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:37:17.407626 kubelet[2275]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:37:17.407626 kubelet[2275]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:37:17.407626 kubelet[2275]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:37:17.407953 kubelet[2275]: I1216 12:37:17.407596 2275 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:37:18.620463 kubelet[2275]: I1216 12:37:18.620389 2275 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:37:18.620463 kubelet[2275]: I1216 12:37:18.620426 2275 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:37:18.620875 kubelet[2275]: I1216 12:37:18.620709 2275 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:37:18.648873 kubelet[2275]: E1216 12:37:18.648799 2275 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:18.650188 kubelet[2275]: I1216 12:37:18.650167 2275 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:37:18.659256 kubelet[2275]: I1216 12:37:18.659203 2275 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:37:18.662158 kubelet[2275]: I1216 12:37:18.662134 2275 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 12:37:18.662948 kubelet[2275]: I1216 12:37:18.662903 2275 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:37:18.663127 kubelet[2275]: I1216 12:37:18.662954 2275 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:37:18.663221 kubelet[2275]: I1216 12:37:18.663200 2275 
topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:37:18.663221 kubelet[2275]: I1216 12:37:18.663209 2275 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:37:18.663423 kubelet[2275]: I1216 12:37:18.663407 2275 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:37:18.665932 kubelet[2275]: I1216 12:37:18.665887 2275 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:37:18.666034 kubelet[2275]: I1216 12:37:18.665924 2275 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:37:18.666070 kubelet[2275]: I1216 12:37:18.666052 2275 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:37:18.666070 kubelet[2275]: I1216 12:37:18.666066 2275 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:37:18.668932 kubelet[2275]: W1216 12:37:18.668880 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:18.668986 kubelet[2275]: E1216 12:37:18.668956 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:18.668986 kubelet[2275]: W1216 12:37:18.668951 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:18.669026 kubelet[2275]: E1216 12:37:18.668996 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:18.670394 kubelet[2275]: I1216 12:37:18.670375 2275 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:37:18.671030 kubelet[2275]: I1216 12:37:18.671014 2275 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:37:18.671186 kubelet[2275]: W1216 12:37:18.671172 2275 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
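[Annotation] The NodeConfig dump a few entries up embeds the kubelet's hard eviction thresholds as JSON. Decoded, they are the stock defaults: evict when memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, or imagefs.inodesFree < 5%. A small sketch that prints them in that readable form, with the values transcribed from the dump above:

```python
#!/usr/bin/env python3
"""Spell out the HardEvictionThresholds from the NodeConfig dump above.

Sketch: values are transcribed from the log; each threshold carries
either an absolute Quantity or a Percentage of the relevant capacity.
"""
thresholds = [
    ("memory.available",   "100Mi", None),
    ("nodefs.available",   None,    0.10),
    ("nodefs.inodesFree",  None,    0.05),
    ("imagefs.available",  None,    0.15),
    ("imagefs.inodesFree", None,    0.05),
]
for signal, quantity, pct in thresholds:
    limit = quantity if quantity is not None else f"{pct:.0%} of capacity"
    print(f"evict when {signal} < {limit}")
```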
Dec 16 12:37:18.672619 kubelet[2275]: I1216 12:37:18.672045 2275 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:37:18.672619 kubelet[2275]: I1216 12:37:18.672086 2275 server.go:1287] "Started kubelet" Dec 16 12:37:18.676526 kubelet[2275]: I1216 12:37:18.676461 2275 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:37:18.676915 kubelet[2275]: E1216 12:37:18.676632 2275 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.86:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.86:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b25c5cf24b63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:37:18.672063331 +0000 UTC m=+1.303283315,LastTimestamp:2025-12-16 12:37:18.672063331 +0000 UTC m=+1.303283315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:37:18.677133 kubelet[2275]: I1216 12:37:18.677102 2275 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:37:18.677244 kubelet[2275]: I1216 12:37:18.677205 2275 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:37:18.677926 kubelet[2275]: I1216 12:37:18.677102 2275 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:37:18.678128 kubelet[2275]: I1216 12:37:18.678104 2275 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:37:18.678346 kubelet[2275]: I1216 12:37:18.678328 2275 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:37:18.678428 kubelet[2275]: I1216 12:37:18.678397 2275 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:37:18.678428 kubelet[2275]: I1216 12:37:18.678387 2275 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:37:18.678667 kubelet[2275]: I1216 12:37:18.678636 2275 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:37:18.679072 kubelet[2275]: W1216 12:37:18.679016 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:18.679121 kubelet[2275]: E1216 12:37:18.679083 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:18.679370 kubelet[2275]: E1216 12:37:18.679324 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="200ms" Dec 16 12:37:18.679704 kubelet[2275]: E1216 12:37:18.678353 2275 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Dec 16 12:37:18.680140 kubelet[2275]: E1216 12:37:18.680117 2275 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:37:18.680140 kubelet[2275]: I1216 12:37:18.680130 2275 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:37:18.680300 kubelet[2275]: I1216 12:37:18.680282 2275 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:37:18.681803 kubelet[2275]: I1216 12:37:18.681577 2275 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:37:18.690607 kubelet[2275]: I1216 12:37:18.690581 2275 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:37:18.690607 kubelet[2275]: I1216 12:37:18.690597 2275 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:37:18.690607 kubelet[2275]: I1216 12:37:18.690616 2275 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:37:18.780323 kubelet[2275]: E1216 12:37:18.780261 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:18.879873 kubelet[2275]: E1216 12:37:18.879730 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="400ms" Dec 16 12:37:18.880933 kubelet[2275]: E1216 12:37:18.880887 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:18.956071 kubelet[2275]: I1216 12:37:18.956009 2275 policy_none.go:49] "None policy: Start" Dec 16 12:37:18.956071 kubelet[2275]: I1216 12:37:18.956042 2275 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:37:18.956071 kubelet[2275]: I1216 12:37:18.956056 2275 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:37:18.959862 kubelet[2275]: I1216 12:37:18.959795 2275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:37:18.961686 kubelet[2275]: I1216 12:37:18.961662 2275 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:37:18.961776 kubelet[2275]: I1216 12:37:18.961696 2275 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:37:18.961776 kubelet[2275]: I1216 12:37:18.961720 2275 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:37:18.961776 kubelet[2275]: I1216 12:37:18.961728 2275 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:37:18.962464 kubelet[2275]: E1216 12:37:18.962439 2275 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:37:18.962673 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:37:18.965053 kubelet[2275]: W1216 12:37:18.964962 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:18.965053 kubelet[2275]: E1216 12:37:18.965007 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:18.974990 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:37:18.978068 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:37:18.981477 kubelet[2275]: E1216 12:37:18.981448 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:18.994774 kubelet[2275]: I1216 12:37:18.994733 2275 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:37:18.995308 kubelet[2275]: I1216 12:37:18.995242 2275 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:37:18.995308 kubelet[2275]: I1216 12:37:18.995260 2275 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:37:18.995753 kubelet[2275]: I1216 12:37:18.995501 2275 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:37:18.996339 kubelet[2275]: E1216 12:37:18.996222 2275 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:37:18.996339 kubelet[2275]: E1216 12:37:18.996264 2275 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:37:19.071913 systemd[1]: Created slice kubepods-burstable-poda828a5aef94a797d5bf53b7888f8afc4.slice - libcontainer container kubepods-burstable-poda828a5aef94a797d5bf53b7888f8afc4.slice. 
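[Annotation] With the API server at 10.0.0.86:6443 still refusing connections, the node-lease controller's "will retry" interval doubles on each failure: 200ms, then 400ms in the entries above, and 800ms a little further down. A sketch of that doubling; the controller's real backoff also caps and jitters the interval, which this log does not show:

```python
#!/usr/bin/env python3
"""Reproduce the doubling retry intervals the lease controller logs.

Sketch: 200ms is the first interval seen above; the real client-side
backoff also caps and jitters the interval, not visible in this log.
"""
interval_ms = 200
for attempt in range(1, 4):
    print(f"attempt {attempt}: retry in {interval_ms}ms")
    interval_ms *= 2
# -> 200ms, 400ms, 800ms, matching the three "will retry" entries
```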
Dec 16 12:37:19.079850 kubelet[2275]: E1216 12:37:19.079546 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:19.080642 kubelet[2275]: I1216 12:37:19.080608 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:19.080712 kubelet[2275]: I1216 12:37:19.080641 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:19.080712 kubelet[2275]: I1216 12:37:19.080661 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:19.080712 kubelet[2275]: I1216 12:37:19.080675 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:19.080712 kubelet[2275]: I1216 12:37:19.080690 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:19.080712 kubelet[2275]: I1216 12:37:19.080703 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:19.080807 kubelet[2275]: I1216 12:37:19.080718 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:19.080807 kubelet[2275]: I1216 12:37:19.080733 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:19.080807 kubelet[2275]: I1216 12:37:19.080751 2275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:19.081938 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 16 12:37:19.090172 kubelet[2275]: E1216 12:37:19.090025 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:19.092665 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 16 12:37:19.094326 kubelet[2275]: E1216 12:37:19.094276 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:19.096596 kubelet[2275]: I1216 12:37:19.096575 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:37:19.097034 kubelet[2275]: E1216 12:37:19.097009 2275 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 16 12:37:19.280680 kubelet[2275]: E1216 12:37:19.280535 2275 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.86:6443: connect: connection refused" interval="800ms" Dec 16 12:37:19.298762 kubelet[2275]: I1216 12:37:19.298714 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:37:19.299123 kubelet[2275]: E1216 12:37:19.299085 2275 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.86:6443/api/v1/nodes\": dial tcp 10.0.0.86:6443: connect: connection refused" node="localhost" Dec 16 12:37:19.381190 containerd[1499]: time="2025-12-16T12:37:19.381141653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a828a5aef94a797d5bf53b7888f8afc4,Namespace:kube-system,Attempt:0,}" Dec 16 12:37:19.390906 containerd[1499]: time="2025-12-16T12:37:19.390865442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 12:37:19.396464 containerd[1499]: time="2025-12-16T12:37:19.396045052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 12:37:19.401560 containerd[1499]: time="2025-12-16T12:37:19.401512427Z" level=info msg="connecting to shim 397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155" address="unix:///run/containerd/s/5e423d0b2234995ef8bf8b91d4532af8d61a6b5ae94e9cbd7db753557872f2f1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:37:19.421621 containerd[1499]: time="2025-12-16T12:37:19.420611380Z" level=info msg="connecting to shim 1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2" address="unix:///run/containerd/s/fc88d59657d33d0ac6a2ef8fe184e7de8583e920662ee9c3bf5dbce7df29b553" namespace=k8s.io protocol=ttrpc version=3 Dec 
Dec 16 12:37:19.433718 containerd[1499]: time="2025-12-16T12:37:19.433667028Z" level=info msg="connecting to shim 66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c" address="unix:///run/containerd/s/ab7e70ccee9b0585d6cb854a895c3285abc40534f6844b8ed35d882ba564aad7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:37:19.435051 systemd[1]: Started cri-containerd-397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155.scope - libcontainer container 397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155. Dec 16 12:37:19.454007 systemd[1]: Started cri-containerd-1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2.scope - libcontainer container 1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2. Dec 16 12:37:19.457318 systemd[1]: Started cri-containerd-66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c.scope - libcontainer container 66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c. Dec 16 12:37:19.500314 containerd[1499]: time="2025-12-16T12:37:19.500263337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a828a5aef94a797d5bf53b7888f8afc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155\"" Dec 16 12:37:19.501180 containerd[1499]: time="2025-12-16T12:37:19.501092329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2\"" Dec 16 12:37:19.503604 containerd[1499]: time="2025-12-16T12:37:19.503562691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c\"" Dec 16 12:37:19.503910 containerd[1499]: time="2025-12-16T12:37:19.503875095Z" level=info msg="CreateContainer within sandbox \"397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:37:19.505067 containerd[1499]: time="2025-12-16T12:37:19.504617270Z" level=info msg="CreateContainer within sandbox \"1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:37:19.505706 containerd[1499]: time="2025-12-16T12:37:19.505681433Z" level=info msg="CreateContainer within sandbox \"66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:37:19.514693 containerd[1499]: time="2025-12-16T12:37:19.514650947Z" level=info msg="Container ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:37:19.517992 containerd[1499]: time="2025-12-16T12:37:19.517950341Z" level=info msg="Container 32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:37:19.519710 containerd[1499]: time="2025-12-16T12:37:19.519680205Z" level=info msg="Container 0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:37:19.524953 containerd[1499]: time="2025-12-16T12:37:19.524913008Z" level=info msg="CreateContainer within sandbox
\"397f9c082c49427f141cafe9449632d6881cd302413de23aa851292d10909155\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41\"" Dec 16 12:37:19.525602 containerd[1499]: time="2025-12-16T12:37:19.525567687Z" level=info msg="StartContainer for \"ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41\"" Dec 16 12:37:19.526752 containerd[1499]: time="2025-12-16T12:37:19.526727692Z" level=info msg="connecting to shim ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41" address="unix:///run/containerd/s/5e423d0b2234995ef8bf8b91d4532af8d61a6b5ae94e9cbd7db753557872f2f1" protocol=ttrpc version=3 Dec 16 12:37:19.527264 containerd[1499]: time="2025-12-16T12:37:19.527231261Z" level=info msg="CreateContainer within sandbox \"1d94c3106b1eb4cd27330e4ab2aee5ebd6efd37bfbbb7b5877b6b88fdf161fe2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c\"" Dec 16 12:37:19.528065 containerd[1499]: time="2025-12-16T12:37:19.527650529Z" level=info msg="StartContainer for \"32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c\"" Dec 16 12:37:19.528243 containerd[1499]: time="2025-12-16T12:37:19.528170870Z" level=info msg="CreateContainer within sandbox \"66b3e4339e5cdacd5abba55b4922fbe29efd57e5db03f15bf487c642a1011d4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf\"" Dec 16 12:37:19.528648 containerd[1499]: time="2025-12-16T12:37:19.528623283Z" level=info msg="StartContainer for \"0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf\"" Dec 16 12:37:19.528738 containerd[1499]: time="2025-12-16T12:37:19.528707224Z" level=info msg="connecting to shim 32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c" address="unix:///run/containerd/s/fc88d59657d33d0ac6a2ef8fe184e7de8583e920662ee9c3bf5dbce7df29b553" protocol=ttrpc version=3 Dec 16 12:37:19.530386 containerd[1499]: time="2025-12-16T12:37:19.530311257Z" level=info msg="connecting to shim 0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf" address="unix:///run/containerd/s/ab7e70ccee9b0585d6cb854a895c3285abc40534f6844b8ed35d882ba564aad7" protocol=ttrpc version=3 Dec 16 12:37:19.554058 systemd[1]: Started cri-containerd-32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c.scope - libcontainer container 32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c. Dec 16 12:37:19.555393 systemd[1]: Started cri-containerd-ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41.scope - libcontainer container ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41. Dec 16 12:37:19.559586 systemd[1]: Started cri-containerd-0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf.scope - libcontainer container 0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf. 
Dec 16 12:37:19.604209 containerd[1499]: time="2025-12-16T12:37:19.604043386Z" level=info msg="StartContainer for \"ce5c990b74520d531f3a3401edff83d892dd4b2fd8748ff17c84ce671e967d41\" returns successfully" Dec 16 12:37:19.617551 kubelet[2275]: W1216 12:37:19.617435 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:19.617551 kubelet[2275]: E1216 12:37:19.617510 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:19.624070 containerd[1499]: time="2025-12-16T12:37:19.624007511Z" level=info msg="StartContainer for \"32729096cfed0abdcc175983a79df057ecf029b2e0a745ea5400c3883fd5f01c\" returns successfully" Dec 16 12:37:19.624617 containerd[1499]: time="2025-12-16T12:37:19.624571101Z" level=info msg="StartContainer for \"0a5624e33765b28caea3535c2114c9f1ce2c39560b830975eeb13615a07eb0cf\" returns successfully" Dec 16 12:37:19.628305 kubelet[2275]: W1216 12:37:19.628240 2275 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.86:6443: connect: connection refused Dec 16 12:37:19.628735 kubelet[2275]: E1216 12:37:19.628685 2275 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.86:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:37:19.701411 kubelet[2275]: I1216 12:37:19.701375 2275 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:37:19.973830 kubelet[2275]: E1216 12:37:19.971301 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:19.974170 kubelet[2275]: E1216 12:37:19.973041 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:19.975192 kubelet[2275]: E1216 12:37:19.975169 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:20.976844 kubelet[2275]: E1216 12:37:20.976748 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:20.977199 kubelet[2275]: E1216 12:37:20.976757 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:21.108919 kubelet[2275]: E1216 12:37:21.108874 2275 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:37:21.156408 kubelet[2275]: I1216 12:37:21.156333 2275 kubelet_node_status.go:78] "Successfully registered node" 
node="localhost" Dec 16 12:37:21.156408 kubelet[2275]: E1216 12:37:21.156385 2275 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:37:21.175884 kubelet[2275]: E1216 12:37:21.174689 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.277296 kubelet[2275]: E1216 12:37:21.276871 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.377562 kubelet[2275]: E1216 12:37:21.377510 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.478195 kubelet[2275]: E1216 12:37:21.478141 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.578780 kubelet[2275]: E1216 12:37:21.578640 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.678830 kubelet[2275]: E1216 12:37:21.678751 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.779502 kubelet[2275]: E1216 12:37:21.779441 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.880005 kubelet[2275]: E1216 12:37:21.879953 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:21.981225 kubelet[2275]: E1216 12:37:21.981174 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.081838 kubelet[2275]: E1216 12:37:22.081768 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.182602 kubelet[2275]: E1216 12:37:22.182447 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.277586 kubelet[2275]: E1216 12:37:22.277540 2275 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:37:22.283106 kubelet[2275]: E1216 12:37:22.283053 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.383676 kubelet[2275]: E1216 12:37:22.383622 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.484057 kubelet[2275]: E1216 12:37:22.483905 2275 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:22.580030 kubelet[2275]: I1216 12:37:22.579994 2275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:22.594665 kubelet[2275]: I1216 12:37:22.594619 2275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:22.600042 kubelet[2275]: I1216 12:37:22.599995 2275 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:22.669022 kubelet[2275]: I1216 12:37:22.668981 2275 apiserver.go:52] "Watching apiserver" Dec 16 12:37:22.679221 kubelet[2275]: I1216 12:37:22.679162 2275 
Dec 16 12:37:22.679221 kubelet[2275]: I1216 12:37:22.679162 2275 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:37:23.316641 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-7.scope)... Dec 16 12:37:23.316665 systemd[1]: Reloading... Dec 16 12:37:23.395860 zram_generator::config[2595]: No configuration found. Dec 16 12:37:23.569832 systemd[1]: Reloading finished in 252 ms. Dec 16 12:37:23.588409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:23.600949 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:37:23.601246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:23.601319 systemd[1]: kubelet.service: Consumed 1.671s CPU time, 128.9M memory peak. Dec 16 12:37:23.603502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:37:23.785972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:37:23.790538 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:37:23.829116 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:37:23.829116 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:37:23.829116 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:37:23.829427 kubelet[2635]: I1216 12:37:23.829101 2635 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:37:23.838631 kubelet[2635]: I1216 12:37:23.838579 2635 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:37:23.838631 kubelet[2635]: I1216 12:37:23.838617 2635 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:37:23.839294 kubelet[2635]: I1216 12:37:23.839200 2635 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:37:23.841220 kubelet[2635]: I1216 12:37:23.841188 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:37:23.843776 kubelet[2635]: I1216 12:37:23.843579 2635 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:37:23.848670 kubelet[2635]: I1216 12:37:23.848634 2635 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:37:23.851554 kubelet[2635]: I1216 12:37:23.851494 2635 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Dec 16 12:37:23.851769 kubelet[2635]: I1216 12:37:23.851695 2635 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:37:23.852040 kubelet[2635]: I1216 12:37:23.851724 2635 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:37:23.852125 kubelet[2635]: I1216 12:37:23.852044 2635 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:37:23.852125 kubelet[2635]: I1216 12:37:23.852055 2635 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:37:23.852125 kubelet[2635]: I1216 12:37:23.852106 2635 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:37:23.852362 kubelet[2635]: I1216 12:37:23.852256 2635 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:37:23.852362 kubelet[2635]: I1216 12:37:23.852273 2635 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:37:23.852362 kubelet[2635]: I1216 12:37:23.852296 2635 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:37:23.852362 kubelet[2635]: I1216 12:37:23.852308 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:37:23.854251 kubelet[2635]: I1216 12:37:23.854224 2635 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:37:23.855407 kubelet[2635]: I1216 12:37:23.855387 2635 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:37:23.857350 kubelet[2635]: I1216 12:37:23.857322 2635 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:37:23.857536 kubelet[2635]: I1216 12:37:23.857513 2635 server.go:1287] "Started kubelet" Dec 16 12:37:23.859366 kubelet[2635]: I1216 12:37:23.858685 2635 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:37:23.862012 kubelet[2635]: I1216 12:37:23.860693 2635 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:37:23.863223 kubelet[2635]: I1216 12:37:23.863157 2635 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:37:23.864208 kubelet[2635]: I1216 12:37:23.864151 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:37:23.864700 kubelet[2635]: I1216 12:37:23.864677 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:37:23.866208 kubelet[2635]: I1216 12:37:23.866176 2635 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:37:23.866417 kubelet[2635]: E1216 12:37:23.866312 2635 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:37:23.866521 kubelet[2635]: I1216 12:37:23.866500 2635 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:37:23.866673 kubelet[2635]: I1216 12:37:23.866634 2635 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:37:23.869662 kubelet[2635]: I1216 12:37:23.869621 2635 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:37:23.872199 kubelet[2635]: I1216 12:37:23.872167 2635 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:37:23.872450 kubelet[2635]: I1216 12:37:23.872425 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:37:23.878581 kubelet[2635]: I1216 12:37:23.878514 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:37:23.879638 kubelet[2635]: I1216 12:37:23.879593 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:37:23.879638 kubelet[2635]: I1216 12:37:23.879627 2635 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:37:23.879766 kubelet[2635]: I1216 12:37:23.879648 2635 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:37:23.879766 kubelet[2635]: I1216 12:37:23.879658 2635 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:37:23.879766 kubelet[2635]: E1216 12:37:23.879705 2635 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:37:23.891602 kubelet[2635]: I1216 12:37:23.890634 2635 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:37:23.893517 kubelet[2635]: E1216 12:37:23.893482 2635 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:37:23.928118 kubelet[2635]: I1216 12:37:23.928090 2635 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:37:23.928118 kubelet[2635]: I1216 12:37:23.928108 2635 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:37:23.928118 kubelet[2635]: I1216 12:37:23.928131 2635 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:37:23.928331 kubelet[2635]: I1216 12:37:23.928314 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:37:23.928363 kubelet[2635]: I1216 12:37:23.928329 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:37:23.928363 kubelet[2635]: I1216 12:37:23.928347 2635 policy_none.go:49] "None policy: Start" Dec 16 12:37:23.928363 kubelet[2635]: I1216 12:37:23.928356 2635 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:37:23.928422 kubelet[2635]: I1216 12:37:23.928365 2635 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:37:23.928470 kubelet[2635]: I1216 12:37:23.928459 2635 state_mem.go:75] "Updated machine memory state" Dec 16 12:37:23.932233 kubelet[2635]: I1216 12:37:23.932206 2635 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:37:23.932404 kubelet[2635]: I1216 12:37:23.932389 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:37:23.932446 kubelet[2635]: I1216 12:37:23.932406 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:37:23.932693 kubelet[2635]: I1216 12:37:23.932658 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:37:23.933659 kubelet[2635]: E1216 12:37:23.933632 2635 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:37:23.981735 kubelet[2635]: I1216 12:37:23.981698 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:23.981989 kubelet[2635]: I1216 12:37:23.981714 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:23.982396 kubelet[2635]: I1216 12:37:23.981879 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.005151 kubelet[2635]: E1216 12:37:24.005108 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.005151 kubelet[2635]: E1216 12:37:24.005148 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.005318 kubelet[2635]: E1216 12:37:24.005104 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:24.035060 kubelet[2635]: I1216 12:37:24.034993 2635 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:37:24.050656 kubelet[2635]: I1216 12:37:24.050619 2635 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:37:24.051221 kubelet[2635]: I1216 12:37:24.051137 2635 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:37:24.067849 kubelet[2635]: I1216 12:37:24.067778 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.067849 kubelet[2635]: I1216 12:37:24.067829 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.067849 kubelet[2635]: I1216 12:37:24.067848 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.068111 kubelet[2635]: I1216 12:37:24.067869 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:24.068111 kubelet[2635]: I1216 12:37:24.067887 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.068111 kubelet[2635]: I1216 12:37:24.067901 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a828a5aef94a797d5bf53b7888f8afc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a828a5aef94a797d5bf53b7888f8afc4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.068111 kubelet[2635]: I1216 12:37:24.067915 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.068111 kubelet[2635]: I1216 12:37:24.067932 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.068567 kubelet[2635]: I1216 12:37:24.067958 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:37:24.853542 kubelet[2635]: I1216 12:37:24.853498 2635 apiserver.go:52] "Watching apiserver" Dec 16 12:37:24.867419 kubelet[2635]: I1216 12:37:24.867352 2635 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:37:24.905947 kubelet[2635]: I1216 12:37:24.905897 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:24.906095 kubelet[2635]: I1216 12:37:24.905993 2635 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.912873 kubelet[2635]: E1216 12:37:24.912613 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:37:24.914011 kubelet[2635]: E1216 12:37:24.913883 2635 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:37:24.961594 kubelet[2635]: I1216 12:37:24.961506 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.9614850109999997 podStartE2EDuration="2.961485011s" podCreationTimestamp="2025-12-16 12:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:37:24.94887388 +0000 UTC m=+1.154946586" watchObservedRunningTime="2025-12-16 12:37:24.961485011 +0000 UTC m=+1.167557718" Dec 16 12:37:24.981885 kubelet[2635]: I1216 12:37:24.981808 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.981791632 podStartE2EDuration="2.981791632s" podCreationTimestamp="2025-12-16 
12:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:37:24.96258556 +0000 UTC m=+1.168658267" watchObservedRunningTime="2025-12-16 12:37:24.981791632 +0000 UTC m=+1.187864379" Dec 16 12:37:25.002292 kubelet[2635]: I1216 12:37:25.002199 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.002178534 podStartE2EDuration="3.002178534s" podCreationTimestamp="2025-12-16 12:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:37:24.983600701 +0000 UTC m=+1.189673408" watchObservedRunningTime="2025-12-16 12:37:25.002178534 +0000 UTC m=+1.208251201" Dec 16 12:37:29.501787 kubelet[2635]: I1216 12:37:29.501636 2635 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:37:29.502793 kubelet[2635]: I1216 12:37:29.502189 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:37:29.502914 containerd[1499]: time="2025-12-16T12:37:29.501954541Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:37:30.064681 systemd[1]: Created slice kubepods-besteffort-podb5155668_d88d_4755_9f5e_50ae71e83c24.slice - libcontainer container kubepods-besteffort-podb5155668_d88d_4755_9f5e_50ae71e83c24.slice. Dec 16 12:37:30.107276 kubelet[2635]: I1216 12:37:30.107168 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5155668-d88d-4755-9f5e-50ae71e83c24-xtables-lock\") pod \"kube-proxy-bfwkq\" (UID: \"b5155668-d88d-4755-9f5e-50ae71e83c24\") " pod="kube-system/kube-proxy-bfwkq" Dec 16 12:37:30.107276 kubelet[2635]: I1216 12:37:30.107225 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5155668-d88d-4755-9f5e-50ae71e83c24-lib-modules\") pod \"kube-proxy-bfwkq\" (UID: \"b5155668-d88d-4755-9f5e-50ae71e83c24\") " pod="kube-system/kube-proxy-bfwkq" Dec 16 12:37:30.107276 kubelet[2635]: I1216 12:37:30.107248 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpgvn\" (UniqueName: \"kubernetes.io/projected/b5155668-d88d-4755-9f5e-50ae71e83c24-kube-api-access-jpgvn\") pod \"kube-proxy-bfwkq\" (UID: \"b5155668-d88d-4755-9f5e-50ae71e83c24\") " pod="kube-system/kube-proxy-bfwkq" Dec 16 12:37:30.108721 kubelet[2635]: I1216 12:37:30.107994 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5155668-d88d-4755-9f5e-50ae71e83c24-kube-proxy\") pod \"kube-proxy-bfwkq\" (UID: \"b5155668-d88d-4755-9f5e-50ae71e83c24\") " pod="kube-system/kube-proxy-bfwkq" Dec 16 12:37:30.216994 kubelet[2635]: E1216 12:37:30.216879 2635 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:37:30.216994 kubelet[2635]: E1216 12:37:30.216943 2635 projected.go:194] Error preparing data for projected volume kube-api-access-jpgvn for pod kube-system/kube-proxy-bfwkq: configmap "kube-root-ca.crt" not found Dec 16 12:37:30.217330 kubelet[2635]: E1216 12:37:30.217288 2635 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b5155668-d88d-4755-9f5e-50ae71e83c24-kube-api-access-jpgvn podName:b5155668-d88d-4755-9f5e-50ae71e83c24 nodeName:}" failed. No retries permitted until 2025-12-16 12:37:30.717251042 +0000 UTC m=+6.923323749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jpgvn" (UniqueName: "kubernetes.io/projected/b5155668-d88d-4755-9f5e-50ae71e83c24-kube-api-access-jpgvn") pod "kube-proxy-bfwkq" (UID: "b5155668-d88d-4755-9f5e-50ae71e83c24") : configmap "kube-root-ca.crt" not found Dec 16 12:37:30.600366 systemd[1]: Created slice kubepods-besteffort-pod4c9ad1de_70c9_45ac_a9c5_b197f3d84c23.slice - libcontainer container kubepods-besteffort-pod4c9ad1de_70c9_45ac_a9c5_b197f3d84c23.slice. Dec 16 12:37:30.611643 kubelet[2635]: I1216 12:37:30.611575 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2fc\" (UniqueName: \"kubernetes.io/projected/4c9ad1de-70c9-45ac-a9c5-b197f3d84c23-kube-api-access-gs2fc\") pod \"tigera-operator-7dcd859c48-567xv\" (UID: \"4c9ad1de-70c9-45ac-a9c5-b197f3d84c23\") " pod="tigera-operator/tigera-operator-7dcd859c48-567xv" Dec 16 12:37:30.612036 kubelet[2635]: I1216 12:37:30.611669 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c9ad1de-70c9-45ac-a9c5-b197f3d84c23-var-lib-calico\") pod \"tigera-operator-7dcd859c48-567xv\" (UID: \"4c9ad1de-70c9-45ac-a9c5-b197f3d84c23\") " pod="tigera-operator/tigera-operator-7dcd859c48-567xv" Dec 16 12:37:30.904257 containerd[1499]: time="2025-12-16T12:37:30.904212509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-567xv,Uid:4c9ad1de-70c9-45ac-a9c5-b197f3d84c23,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:37:30.924897 containerd[1499]: time="2025-12-16T12:37:30.924843849Z" level=info msg="connecting to shim 3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e" address="unix:///run/containerd/s/7a53528f6874de4511fa26350abbe5cf9b11004c5ccb2f308cba95cee606cfa2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:37:30.957082 systemd[1]: Started cri-containerd-3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e.scope - libcontainer container 3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e. 
Dec 16 12:37:30.986473 containerd[1499]: time="2025-12-16T12:37:30.986422864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfwkq,Uid:b5155668-d88d-4755-9f5e-50ae71e83c24,Namespace:kube-system,Attempt:0,}" Dec 16 12:37:30.995044 containerd[1499]: time="2025-12-16T12:37:30.994990015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-567xv,Uid:4c9ad1de-70c9-45ac-a9c5-b197f3d84c23,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e\"" Dec 16 12:37:30.997959 containerd[1499]: time="2025-12-16T12:37:30.997922810Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:37:31.007596 containerd[1499]: time="2025-12-16T12:37:31.007497016Z" level=info msg="connecting to shim 6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b" address="unix:///run/containerd/s/16dd35fcade443fb96fd2ad3ed27c948dfba7612e5e515d5657eae29314e583d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:37:31.040084 systemd[1]: Started cri-containerd-6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b.scope - libcontainer container 6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b. Dec 16 12:37:31.068002 containerd[1499]: time="2025-12-16T12:37:31.067937376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bfwkq,Uid:b5155668-d88d-4755-9f5e-50ae71e83c24,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b\"" Dec 16 12:37:31.071848 containerd[1499]: time="2025-12-16T12:37:31.071785036Z" level=info msg="CreateContainer within sandbox \"6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:37:31.083170 containerd[1499]: time="2025-12-16T12:37:31.083117226Z" level=info msg="Container a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:37:31.091894 containerd[1499]: time="2025-12-16T12:37:31.091800204Z" level=info msg="CreateContainer within sandbox \"6a8a233f07f476bbb74f70ee5815ed15a6074b4726fd272a87bda1a1c015bc4b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726\"" Dec 16 12:37:31.093224 containerd[1499]: time="2025-12-16T12:37:31.093023335Z" level=info msg="StartContainer for \"a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726\"" Dec 16 12:37:31.094866 containerd[1499]: time="2025-12-16T12:37:31.094830429Z" level=info msg="connecting to shim a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726" address="unix:///run/containerd/s/16dd35fcade443fb96fd2ad3ed27c948dfba7612e5e515d5657eae29314e583d" protocol=ttrpc version=3 Dec 16 12:37:31.117236 systemd[1]: Started cri-containerd-a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726.scope - libcontainer container a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726. 
Dec 16 12:37:31.206708 containerd[1499]: time="2025-12-16T12:37:31.206458450Z" level=info msg="StartContainer for \"a6d2ae4aa9f1a950cbbd6d18158d58b16567509dcc9aecf1729897d7eeb3a726\" returns successfully" Dec 16 12:37:31.948188 kubelet[2635]: I1216 12:37:31.948102 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bfwkq" podStartSLOduration=1.948055812 podStartE2EDuration="1.948055812s" podCreationTimestamp="2025-12-16 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:37:31.935494089 +0000 UTC m=+8.141566796" watchObservedRunningTime="2025-12-16 12:37:31.948055812 +0000 UTC m=+8.154128519" Dec 16 12:37:33.319733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108284336.mount: Deactivated successfully. Dec 16 12:37:36.822096 containerd[1499]: time="2025-12-16T12:37:36.822049149Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:36.823021 containerd[1499]: time="2025-12-16T12:37:36.822595688Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:37:36.823609 containerd[1499]: time="2025-12-16T12:37:36.823543909Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:36.825871 containerd[1499]: time="2025-12-16T12:37:36.825796870Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:37:36.826498 containerd[1499]: time="2025-12-16T12:37:36.826461181Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.828329701s" Dec 16 12:37:36.826568 containerd[1499]: time="2025-12-16T12:37:36.826499065Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:37:36.830272 containerd[1499]: time="2025-12-16T12:37:36.830233665Z" level=info msg="CreateContainer within sandbox \"3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:37:36.839786 containerd[1499]: time="2025-12-16T12:37:36.839733921Z" level=info msg="Container 7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:37:36.842477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548732029.mount: Deactivated successfully. 
Dec 16 12:37:36.847703 containerd[1499]: time="2025-12-16T12:37:36.847642168Z" level=info msg="CreateContainer within sandbox \"3f6ab1079523b7f2dadcdbd0aa0fae56aca038af8ff5ce02de12940e1399131e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115\"" Dec 16 12:37:36.848177 containerd[1499]: time="2025-12-16T12:37:36.848143941Z" level=info msg="StartContainer for \"7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115\"" Dec 16 12:37:36.849445 containerd[1499]: time="2025-12-16T12:37:36.849416397Z" level=info msg="connecting to shim 7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115" address="unix:///run/containerd/s/7a53528f6874de4511fa26350abbe5cf9b11004c5ccb2f308cba95cee606cfa2" protocol=ttrpc version=3 Dec 16 12:37:36.874044 systemd[1]: Started cri-containerd-7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115.scope - libcontainer container 7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115. Dec 16 12:37:36.901755 containerd[1499]: time="2025-12-16T12:37:36.901695911Z" level=info msg="StartContainer for \"7c8d84759e4670d748b67f6bd57c0d9b81b0c1f2dc4a49affe4df679e5ba8115\" returns successfully" Dec 16 12:37:36.944561 kubelet[2635]: I1216 12:37:36.944387 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-567xv" podStartSLOduration=1.11278794 podStartE2EDuration="6.944366996s" podCreationTimestamp="2025-12-16 12:37:30 +0000 UTC" firstStartedPulling="2025-12-16 12:37:30.996957427 +0000 UTC m=+7.203030134" lastFinishedPulling="2025-12-16 12:37:36.828536523 +0000 UTC m=+13.034609190" observedRunningTime="2025-12-16 12:37:36.944289228 +0000 UTC m=+13.150361975" watchObservedRunningTime="2025-12-16 12:37:36.944366996 +0000 UTC m=+13.150439703" Dec 16 12:37:39.447180 update_engine[1488]: I20251216 12:37:39.445906 1488 update_attempter.cc:509] Updating boot flags... Dec 16 12:37:42.442788 sudo[1720]: pam_unix(sudo:session): session closed for user root Dec 16 12:37:42.448331 sshd[1719]: Connection closed by 10.0.0.1 port 43162 Dec 16 12:37:42.449963 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Dec 16 12:37:42.455193 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:37:42.455547 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:43162.service: Deactivated successfully. Dec 16 12:37:42.458620 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:37:42.458986 systemd[1]: session-7.scope: Consumed 6.741s CPU time, 222.6M memory peak. Dec 16 12:37:42.461546 systemd-logind[1482]: Removed session 7. Dec 16 12:37:49.505148 systemd[1]: Created slice kubepods-besteffort-pod7beb1a2e_15e1_4a97_8a64_9e7b470ba99b.slice - libcontainer container kubepods-besteffort-pod7beb1a2e_15e1_4a97_8a64_9e7b470ba99b.slice. 
Dec 16 12:37:49.534258 kubelet[2635]: I1216 12:37:49.534212 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7beb1a2e-15e1-4a97-8a64-9e7b470ba99b-typha-certs\") pod \"calico-typha-8569d748cd-lzmr7\" (UID: \"7beb1a2e-15e1-4a97-8a64-9e7b470ba99b\") " pod="calico-system/calico-typha-8569d748cd-lzmr7"
Dec 16 12:37:49.534258 kubelet[2635]: I1216 12:37:49.534263 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkdpt\" (UniqueName: \"kubernetes.io/projected/7beb1a2e-15e1-4a97-8a64-9e7b470ba99b-kube-api-access-xkdpt\") pod \"calico-typha-8569d748cd-lzmr7\" (UID: \"7beb1a2e-15e1-4a97-8a64-9e7b470ba99b\") " pod="calico-system/calico-typha-8569d748cd-lzmr7"
Dec 16 12:37:49.534763 kubelet[2635]: I1216 12:37:49.534285 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7beb1a2e-15e1-4a97-8a64-9e7b470ba99b-tigera-ca-bundle\") pod \"calico-typha-8569d748cd-lzmr7\" (UID: \"7beb1a2e-15e1-4a97-8a64-9e7b470ba99b\") " pod="calico-system/calico-typha-8569d748cd-lzmr7"
Dec 16 12:37:49.678357 systemd[1]: Created slice kubepods-besteffort-pod0b11778b_a48b_4ad6_8e1a_6107b1f6bc31.slice - libcontainer container kubepods-besteffort-pod0b11778b_a48b_4ad6_8e1a_6107b1f6bc31.slice.
Dec 16 12:37:49.735201 kubelet[2635]: I1216 12:37:49.735139 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fn9p\" (UniqueName: \"kubernetes.io/projected/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-kube-api-access-4fn9p\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735201 kubelet[2635]: I1216 12:37:49.735211 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-policysync\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735518 kubelet[2635]: I1216 12:37:49.735235 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-xtables-lock\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735518 kubelet[2635]: I1216 12:37:49.735254 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-cni-bin-dir\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735518 kubelet[2635]: I1216 12:37:49.735285 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-var-lib-calico\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735518 kubelet[2635]: I1216 12:37:49.735338 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-cni-log-dir\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735707 kubelet[2635]: I1216 12:37:49.735410 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-tigera-ca-bundle\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735707 kubelet[2635]: I1216 12:37:49.735678 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-lib-modules\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735836 kubelet[2635]: I1216 12:37:49.735802 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-node-certs\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.735923 kubelet[2635]: I1216 12:37:49.735908 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-cni-net-dir\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.736055 kubelet[2635]: I1216 12:37:49.735986 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-flexvol-driver-host\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.736055 kubelet[2635]: I1216 12:37:49.736006 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b11778b-a48b-4ad6-8e1a-6107b1f6bc31-var-run-calico\") pod \"calico-node-gkx5n\" (UID: \"0b11778b-a48b-4ad6-8e1a-6107b1f6bc31\") " pod="calico-system/calico-node-gkx5n"
Dec 16 12:37:49.810232 containerd[1499]: time="2025-12-16T12:37:49.809735576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8569d748cd-lzmr7,Uid:7beb1a2e-15e1-4a97-8a64-9e7b470ba99b,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:49.838555 kubelet[2635]: E1216 12:37:49.838503 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:37:49.838555 kubelet[2635]: W1216 12:37:49.838525 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:37:49.839870 kubelet[2635]: E1216 12:37:49.839865 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
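
Every reconciler_common.go:251 record above has the same shape: the volume name, its UniqueName (plugin type plus a pod-UID-prefixed identifier), and the owning pod. A minimal Go sketch for pulling those fields out of one of these journal lines follows; the regular expression and the output format are my own illustration, not anything kubelet ships.

    package main

    import (
        "fmt"
        "regexp"
    )

    // Illustrative pattern for the reconciler records above; the capture-group
    // layout is an assumption about this log format, not a kubelet API.
    var reconcilerRe = regexp.MustCompile(
        `VerifyControllerAttachedVolume started for volume \\"([^"\\]+)\\" ` +
            `\(UniqueName: \\"([^"\\]+)\\"\).*pod="([^"]+)"`)

    func main() {
        line := `reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7beb1a2e-15e1-4a97-8a64-9e7b470ba99b-typha-certs\") pod \"calico-typha-8569d748cd-lzmr7\" (UID: \"7beb1a2e-15e1-4a97-8a64-9e7b470ba99b\") " pod="calico-system/calico-typha-8569d748cd-lzmr7"`
        if m := reconcilerRe.FindStringSubmatch(line); m != nil {
            // Prints: volume=typha-certs unique=kubernetes.io/secret/... pod=calico-system/...
            fmt.Printf("volume=%s unique=%s pod=%s\n", m[1], m[2], m[3])
        }
    }
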
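Each repeated triplet above is a single failed probe of the FlexVolume plugin directory nodeagent~uds: kubelet invokes the driver binary with the init argument, the binary is absent, so the call returns empty output, and decoding that empty output as the driver's JSON status fails. Both messages come straight from the Go standard library; a minimal sketch that reproduces them under that reading (the bare binary name and the status struct here are illustrative, not kubelet's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Illustrative stand-in for the JSON reply a FlexVolume driver prints.
    type driverStatus struct {
        Status string `json:"status"`
    }

    func main() {
        // A missing executable surfaces exec.ErrNotFound:
        // "executable file not found in $PATH".
        _, err := exec.LookPath("uds") // hypothetical driver name, assumed absent
        fmt.Println(err)

        // The failed call produced no output, and decoding the empty reply
        // yields "unexpected end of JSON input", exactly as driver-call.go logs.
        var st driverStatus
        fmt.Println(json.Unmarshal([]byte(""), &st))
    }
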
Dec 16 12:37:49.867119 kubelet[2635]: E1216 12:37:49.867049 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167"
Dec 16 12:37:49.909348 containerd[1499]: time="2025-12-16T12:37:49.909300604Z" level=info msg="connecting to shim c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7" address="unix:///run/containerd/s/d429ceacecb02a44c914da39621a3d338c5bb176d1ec783b2383f9af5aba86f2" namespace=k8s.io protocol=ttrpc version=3
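
The pod_workers.go:1301 record is expected at this point in the boot: csi-node-driver-8sdm5 cannot be synced while the runtime reports NetworkReady=false, and the Calico CNI plugin only initializes after calico-node has installed its binaries and written a network config. A quick node-side check is to look for that config; /etc/cni/net.d is assumed here as the conventional default directory, though the runtime can be configured elsewhere.

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Assumed default CNI config dir; adjust if the runtime uses a
        // different conf_dir.
        lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
        if found := append(lists, confs...); len(found) > 0 {
            fmt.Println("CNI config present:", found)
        } else {
            fmt.Println("no CNI config yet: expect NetworkReady=false")
        }
    }
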
Dec 16 12:37:49.938215 kubelet[2635]: I1216 12:37:49.938047 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7fa23f6-6f8a-4051-b7ef-6f427f114167-kubelet-dir\") pod \"csi-node-driver-8sdm5\" (UID: \"d7fa23f6-6f8a-4051-b7ef-6f427f114167\") " pod="calico-system/csi-node-driver-8sdm5"
Dec 16 12:37:49.938403 kubelet[2635]: I1216 12:37:49.938270 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7fa23f6-6f8a-4051-b7ef-6f427f114167-socket-dir\") pod \"csi-node-driver-8sdm5\" (UID: \"d7fa23f6-6f8a-4051-b7ef-6f427f114167\") " pod="calico-system/csi-node-driver-8sdm5"
Dec 16 12:37:49.938709 kubelet[2635]: I1216 12:37:49.938631 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthcp\" (UniqueName: \"kubernetes.io/projected/d7fa23f6-6f8a-4051-b7ef-6f427f114167-kube-api-access-fthcp\") pod \"csi-node-driver-8sdm5\" (UID: \"d7fa23f6-6f8a-4051-b7ef-6f427f114167\") " pod="calico-system/csi-node-driver-8sdm5"
Dec 16 12:37:49.939410 kubelet[2635]: I1216 12:37:49.939469 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7fa23f6-6f8a-4051-b7ef-6f427f114167-registration-dir\") pod \"csi-node-driver-8sdm5\" (UID: \"d7fa23f6-6f8a-4051-b7ef-6f427f114167\") " pod="calico-system/csi-node-driver-8sdm5"
Dec 16 12:37:49.944595 kubelet[2635]: I1216 12:37:49.944553 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7fa23f6-6f8a-4051-b7ef-6f427f114167-varrun\") pod \"csi-node-driver-8sdm5\" (UID: \"d7fa23f6-6f8a-4051-b7ef-6f427f114167\") " pod="calico-system/csi-node-driver-8sdm5"
Dec 16 12:37:49.974084 systemd[1]: Started cri-containerd-c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7.scope - libcontainer container c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7.
Dec 16 12:37:49.982053 containerd[1499]: time="2025-12-16T12:37:49.981959530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gkx5n,Uid:0b11778b-a48b-4ad6-8e1a-6107b1f6bc31,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:50.006721 containerd[1499]: time="2025-12-16T12:37:50.006672934Z" level=info msg="connecting to shim 96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9" address="unix:///run/containerd/s/1b626b1f8136ce5287e82e2762215c80d213ec29aebd88d423a6095ebcb4e6de" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:37:50.039235 systemd[1]: Started cri-containerd-96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9.scope - libcontainer container 96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9.
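
The "connecting to shim" records tie each sandbox to a per-shim ttrpc endpoint under /run/containerd/s/. The transport underneath is a Unix domain socket, so a bare connectivity probe can be sketched as below; this assumes the socket path from the record above and is not containerd's actual client, which speaks ttrpc over that socket.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Shim socket path copied from the log record above.
        const sock = "/run/containerd/s/1b626b1f8136ce5287e82e2762215c80d213ec29aebd88d423a6095ebcb4e6de"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("shim socket open:", sock)
    }
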
Error: unexpected end of JSON input" Dec 16 12:37:50.047737 kubelet[2635]: E1216 12:37:50.047545 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.047737 kubelet[2635]: W1216 12:37:50.047565 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.047737 kubelet[2635]: E1216 12:37:50.047589 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.047982 kubelet[2635]: E1216 12:37:50.047962 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.048037 kubelet[2635]: W1216 12:37:50.048025 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.048110 kubelet[2635]: E1216 12:37:50.048093 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.049928 kubelet[2635]: E1216 12:37:50.048410 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.049928 kubelet[2635]: W1216 12:37:50.049918 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.050011 kubelet[2635]: E1216 12:37:50.049945 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.050210 kubelet[2635]: E1216 12:37:50.050190 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.050210 kubelet[2635]: W1216 12:37:50.050203 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.050288 kubelet[2635]: E1216 12:37:50.050261 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.050397 kubelet[2635]: E1216 12:37:50.050380 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.050397 kubelet[2635]: W1216 12:37:50.050391 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.050458 kubelet[2635]: E1216 12:37:50.050437 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:37:50.050577 kubelet[2635]: E1216 12:37:50.050554 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.050577 kubelet[2635]: W1216 12:37:50.050566 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.050629 kubelet[2635]: E1216 12:37:50.050580 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.050856 kubelet[2635]: E1216 12:37:50.050836 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.050856 kubelet[2635]: W1216 12:37:50.050850 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.050948 kubelet[2635]: E1216 12:37:50.050865 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.051134 kubelet[2635]: E1216 12:37:50.051109 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.051134 kubelet[2635]: W1216 12:37:50.051124 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.051203 kubelet[2635]: E1216 12:37:50.051149 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.051355 kubelet[2635]: E1216 12:37:50.051337 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.051355 kubelet[2635]: W1216 12:37:50.051350 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.051355 kubelet[2635]: E1216 12:37:50.051363 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:37:50.051538 kubelet[2635]: E1216 12:37:50.051519 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:37:50.051538 kubelet[2635]: W1216 12:37:50.051531 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:37:50.051626 kubelet[2635]: E1216 12:37:50.051558 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 16 12:37:50.051723 kubelet[2635]: E1216 12:37:50.051706 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:37:50.051723 kubelet[2635]: W1216 12:37:50.051719 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:37:50.051791 kubelet[2635]: E1216 12:37:50.051748 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
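The three-line pattern above is kubelet's FlexVolume dynamic probe failing. Each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is expected to contain a driver executable that kubelet invokes with the argument init, and the driver must print a JSON status on stdout. A minimal sketch of the expected handshake for a non-attachable driver (illustrative output, not captured from this system):

    /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds init
    {"status": "Success", "capabilities": {"attach": false}}

Because the uds binary does not exist yet, the call produces no output at all, the empty string fails JSON unmarshalling, and kubelet skips the nodeagent~uds plugin directory on every probe.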
Dec 16 12:37:50.062086 containerd[1499]: time="2025-12-16T12:37:50.061959733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8569d748cd-lzmr7,Uid:7beb1a2e-15e1-4a97-8a64-9e7b470ba99b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7\""
Dec 16 12:37:50.073104 containerd[1499]: time="2025-12-16T12:37:50.073006020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:37:50.101491 containerd[1499]: time="2025-12-16T12:37:50.101430223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gkx5n,Uid:0b11778b-a48b-4ad6-8e1a-6107b1f6bc31,Namespace:calico-system,Attempt:0,} returns sandbox id \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\""
Dec 16 12:37:51.255074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115367315.mount: Deactivated successfully.
Dec 16 12:37:51.608052 containerd[1499]: time="2025-12-16T12:37:51.607992019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:51.609044 containerd[1499]: time="2025-12-16T12:37:51.609019234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 16 12:37:51.610057 containerd[1499]: time="2025-12-16T12:37:51.610034047Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:51.612145 containerd[1499]: time="2025-12-16T12:37:51.612106716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:51.613237 containerd[1499]: time="2025-12-16T12:37:51.612878197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.539829215s"
Dec 16 12:37:51.613237 containerd[1499]: time="2025-12-16T12:37:51.612909279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 16 12:37:51.615829 containerd[1499]: time="2025-12-16T12:37:51.615782110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:37:51.634624 containerd[1499]: time="2025-12-16T12:37:51.634177041Z" level=info msg="CreateContainer within sandbox \"c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 12:37:51.654986 containerd[1499]: time="2025-12-16T12:37:51.654931056Z" level=info msg="Container 8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:37:51.662193 containerd[1499]: time="2025-12-16T12:37:51.662149237Z" level=info msg="CreateContainer within sandbox \"c4a21b373defe5088c01f9cd58804a4f67749245abf00566777c8cf009fe56a7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43\""
Dec 16 12:37:51.666841 containerd[1499]: time="2025-12-16T12:37:51.665510174Z" level=info msg="StartContainer for \"8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43\""
Dec 16 12:37:51.666841 containerd[1499]: time="2025-12-16T12:37:51.666707837Z" level=info msg="connecting to shim 8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43" address="unix:///run/containerd/s/d429ceacecb02a44c914da39621a3d338c5bb176d1ec783b2383f9af5aba86f2" protocol=ttrpc version=3
Dec 16 12:37:51.694027 systemd[1]: Started cri-containerd-8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43.scope - libcontainer container 8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43.
Dec 16 12:37:51.741254 containerd[1499]: time="2025-12-16T12:37:51.741214568Z" level=info msg="StartContainer for \"8acaf826713f2ebd6d183042102ec01dd28b483f3f736e26131eba1fda5e4d43\" returns successfully"
Dec 16 12:37:51.883354 kubelet[2635]: E1216 12:37:51.882251 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167"
Dec 16 12:37:51.990919 kubelet[2635]: I1216 12:37:51.990856 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8569d748cd-lzmr7" podStartSLOduration=1.446962667 podStartE2EDuration="2.990812057s" podCreationTimestamp="2025-12-16 12:37:49 +0000 UTC" firstStartedPulling="2025-12-16 12:37:50.071768232 +0000 UTC m=+26.277840939" lastFinishedPulling="2025-12-16 12:37:51.615617622 +0000 UTC m=+27.821690329" observedRunningTime="2025-12-16 12:37:51.990720132 +0000 UTC m=+28.196792839" watchObservedRunningTime="2025-12-16 12:37:51.990812057 +0000 UTC m=+28.196884764"
Dec 16 12:37:52.043586 kubelet[2635]: E1216 12:37:52.043454 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:37:52.043586 kubelet[2635]: W1216 12:37:52.043477 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:37:52.051304 kubelet[2635]: E1216 12:37:52.051263 2635 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:37:52.752318 containerd[1499]: time="2025-12-16T12:37:52.752264910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:52.753459 containerd[1499]: time="2025-12-16T12:37:52.753431490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Dec 16 12:37:52.754706 containerd[1499]: time="2025-12-16T12:37:52.754414059Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:52.756700 containerd[1499]: time="2025-12-16T12:37:52.756666854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:52.757352 containerd[1499]: time="2025-12-16T12:37:52.757326487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.141504975s"
Dec 16 12:37:52.757545 containerd[1499]: time="2025-12-16T12:37:52.757355208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Dec 16 12:37:52.759439 containerd[1499]: time="2025-12-16T12:37:52.759408472Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 12:37:52.767848 containerd[1499]: time="2025-12-16T12:37:52.767464761Z" level=info msg="Container df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:37:52.775520 containerd[1499]: time="2025-12-16T12:37:52.775486927Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed\""
Dec 16 12:37:52.777361 containerd[1499]: time="2025-12-16T12:37:52.777328941Z" level=info msg="StartContainer for \"df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed\""
Dec 16 12:37:52.778753 containerd[1499]: time="2025-12-16T12:37:52.778728012Z" level=info msg="connecting to shim df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed" address="unix:///run/containerd/s/1b626b1f8136ce5287e82e2762215c80d213ec29aebd88d423a6095ebcb4e6de" protocol=ttrpc version=3
Dec 16 12:37:52.804993 systemd[1]: Started cri-containerd-df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed.scope - libcontainer container df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed.
Dec 16 12:37:52.879912 containerd[1499]: time="2025-12-16T12:37:52.879872098Z" level=info msg="StartContainer for \"df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed\" returns successfully"
Dec 16 12:37:52.892764 systemd[1]: cri-containerd-df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed.scope: Deactivated successfully.
Dec 16 12:37:52.933899 containerd[1499]: time="2025-12-16T12:37:52.926865080Z" level=info msg="received container exit event container_id:\"df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed\" id:\"df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed\" pid:3356 exited_at:{seconds:1765888672 nanos:913039019}"
Dec 16 12:37:52.967743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df2d52e3d70017c78bb6264565fa6a780476f725ecb832c58487570f8d350aed-rootfs.mount: Deactivated successfully.
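The short-lived flexvol-driver container that just ran and exited is Calico's pod2daemon-flexvol init container; its job is to install the uds FlexVolume driver into the kubelet plugin directory that the probe errors above were complaining about, which is why it exits immediately after a successful start. One way to confirm on the node that the driver has landed (a sketch, assuming shell access; the path is the one kubelet printed earlier):

    ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/

Once the uds executable exists there, the init handshake can return valid JSON and the probe errors should stop recurring.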
Dec 16 12:37:52.984445 kubelet[2635]: I1216 12:37:52.984387 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 12:37:53.881411 kubelet[2635]: E1216 12:37:53.881354 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167"
Dec 16 12:37:53.989359 containerd[1499]: time="2025-12-16T12:37:53.989315084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 16 12:37:55.880858 kubelet[2635]: E1216 12:37:55.880730 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167"
Dec 16 12:37:56.667511 containerd[1499]: time="2025-12-16T12:37:56.667461024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:56.668127 containerd[1499]: time="2025-12-16T12:37:56.668092691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Dec 16 12:37:56.668955 containerd[1499]: time="2025-12-16T12:37:56.668929528Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:56.671073 containerd[1499]: time="2025-12-16T12:37:56.671033459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:37:56.671891 containerd[1499]: time="2025-12-16T12:37:56.671862456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.682504369s"
Dec 16 12:37:56.671942 containerd[1499]: time="2025-12-16T12:37:56.671895617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Dec 16 12:37:56.674542 containerd[1499]: time="2025-12-16T12:37:56.674506571Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 16 12:37:56.682347 containerd[1499]: time="2025-12-16T12:37:56.681999858Z" level=info msg="Container bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:37:56.693756 containerd[1499]: time="2025-12-16T12:37:56.693619324Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5\""
Dec 16 12:37:56.694179 containerd[1499]: time="2025-12-16T12:37:56.694147947Z" level=info msg="StartContainer for \"bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5\""
Dec 16 12:37:56.695849 containerd[1499]: time="2025-12-16T12:37:56.695805659Z" level=info msg="connecting to shim bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5" address="unix:///run/containerd/s/1b626b1f8136ce5287e82e2762215c80d213ec29aebd88d423a6095ebcb4e6de" protocol=ttrpc version=3
Dec 16 12:37:56.714074 systemd[1]: Started cri-containerd-bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5.scope - libcontainer container bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5.
Dec 16 12:37:56.804863 containerd[1499]: time="2025-12-16T12:37:56.804801852Z" level=info msg="StartContainer for \"bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5\" returns successfully"
Dec 16 12:37:57.382905 systemd[1]: cri-containerd-bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5.scope: Deactivated successfully.
Dec 16 12:37:57.383247 systemd[1]: cri-containerd-bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5.scope: Consumed 479ms CPU time, 173.8M memory peak, 1.8M read from disk, 165.9M written to disk.
Dec 16 12:37:57.385667 containerd[1499]: time="2025-12-16T12:37:57.385614555Z" level=info msg="received container exit event container_id:\"bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5\" id:\"bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5\" pid:3415 exited_at:{seconds:1765888677 nanos:385247740}"
Dec 16 12:37:57.405365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb36f04475db34ea728f69d332a7a19fca40b7d4bcd2f0d3fa14b8e5aeb934e5-rootfs.mount: Deactivated successfully.
Dec 16 12:37:57.469622 kubelet[2635]: I1216 12:37:57.469593 2635 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 12:37:57.532143 systemd[1]: Created slice kubepods-besteffort-pod7daf5ab5_6622_4a5d_ba28_10d8ac9c5df7.slice - libcontainer container kubepods-besteffort-pod7daf5ab5_6622_4a5d_ba28_10d8ac9c5df7.slice.
Dec 16 12:37:57.540637 systemd[1]: Created slice kubepods-besteffort-pod12f0abbf_eb53_4a17_a66e_13f45ba59ea7.slice - libcontainer container kubepods-besteffort-pod12f0abbf_eb53_4a17_a66e_13f45ba59ea7.slice.
Dec 16 12:37:57.563885 systemd[1]: Created slice kubepods-besteffort-pod58c1d842_0739_4b32_93b5_118e08e0afe6.slice - libcontainer container kubepods-besteffort-pod58c1d842_0739_4b32_93b5_118e08e0afe6.slice.
Dec 16 12:37:57.568146 systemd[1]: Created slice kubepods-besteffort-pod05b0c0cf_fa4e_4d16_bf34_9f5bf04ca880.slice - libcontainer container kubepods-besteffort-pod05b0c0cf_fa4e_4d16_bf34_9f5bf04ca880.slice.
Dec 16 12:37:57.571092 systemd[1]: Created slice kubepods-burstable-podb10b6c17_28bd_4f18_aab7_57c174d58243.slice - libcontainer container kubepods-burstable-podb10b6c17_28bd_4f18_aab7_57c174d58243.slice.
Dec 16 12:37:57.579626 systemd[1]: Created slice kubepods-besteffort-pod9321e5e2_d99b_4289_9374_6508cb7284e1.slice - libcontainer container kubepods-besteffort-pod9321e5e2_d99b_4289_9374_6508cb7284e1.slice.
Dec 16 12:37:57.584159 systemd[1]: Created slice kubepods-burstable-pod656ca576_6f38_4a19_a24e_4f26795374d3.slice - libcontainer container kubepods-burstable-pod656ca576_6f38_4a19_a24e_4f26795374d3.slice.
Dec 16 12:37:57.612134 kubelet[2635]: I1216 12:37:57.612081 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58c1d842-0739-4b32-93b5-118e08e0afe6-calico-apiserver-certs\") pod \"calico-apiserver-9d47ffd8c-qf5gh\" (UID: \"58c1d842-0739-4b32-93b5-118e08e0afe6\") " pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh"
Dec 16 12:37:57.612330 kubelet[2635]: I1216 12:37:57.612220 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxxm\" (UniqueName: \"kubernetes.io/projected/9321e5e2-d99b-4289-9374-6508cb7284e1-kube-api-access-2jxxm\") pod \"goldmane-666569f655-2h7z8\" (UID: \"9321e5e2-d99b-4289-9374-6508cb7284e1\") " pod="calico-system/goldmane-666569f655-2h7z8"
Dec 16 12:37:57.612330 kubelet[2635]: I1216 12:37:57.612247 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-ca-bundle\") pod \"whisker-dfc95554d-nhcl9\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " pod="calico-system/whisker-dfc95554d-nhcl9"
Dec 16 12:37:57.612330 kubelet[2635]: I1216 12:37:57.612313 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk798\" (UniqueName: \"kubernetes.io/projected/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-kube-api-access-kk798\") pod \"whisker-dfc95554d-nhcl9\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " pod="calico-system/whisker-dfc95554d-nhcl9"
Dec 16 12:37:57.612486 kubelet[2635]: I1216 12:37:57.612396 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55lr4\" (UniqueName: \"kubernetes.io/projected/7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7-kube-api-access-55lr4\") pod \"calico-kube-controllers-645b4df4cc-zrxc7\" (UID: \"7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7\") " pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7"
Dec 16 12:37:57.612486 kubelet[2635]: I1216 12:37:57.612448 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9321e5e2-d99b-4289-9374-6508cb7284e1-goldmane-ca-bundle\") pod \"goldmane-666569f655-2h7z8\" (UID: \"9321e5e2-d99b-4289-9374-6508cb7284e1\") " pod="calico-system/goldmane-666569f655-2h7z8"
Dec 16 12:37:57.612486 kubelet[2635]: I1216 12:37:57.612483 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gtc\" (UniqueName: \"kubernetes.io/projected/12f0abbf-eb53-4a17-a66e-13f45ba59ea7-kube-api-access-h5gtc\") pod \"calico-apiserver-9d47ffd8c-g75vj\" (UID: \"12f0abbf-eb53-4a17-a66e-13f45ba59ea7\") " pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj"
Dec 16 12:37:57.612549 kubelet[2635]: I1216 12:37:57.612524 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b10b6c17-28bd-4f18-aab7-57c174d58243-config-volume\") pod \"coredns-668d6bf9bc-vrgnm\" (UID: \"b10b6c17-28bd-4f18-aab7-57c174d58243\") " pod="kube-system/coredns-668d6bf9bc-vrgnm"
Dec 16 12:37:57.612575 kubelet[2635]: I1216 12:37:57.612552 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9321e5e2-d99b-4289-9374-6508cb7284e1-goldmane-key-pair\") pod \"goldmane-666569f655-2h7z8\" (UID: \"9321e5e2-d99b-4289-9374-6508cb7284e1\") " pod="calico-system/goldmane-666569f655-2h7z8"
Dec 16 12:37:57.612598 kubelet[2635]: I1216 12:37:57.612577 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9321e5e2-d99b-4289-9374-6508cb7284e1-config\") pod \"goldmane-666569f655-2h7z8\" (UID: \"9321e5e2-d99b-4289-9374-6508cb7284e1\") " pod="calico-system/goldmane-666569f655-2h7z8"
Dec 16 12:37:57.612598 kubelet[2635]: I1216 12:37:57.612593 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7-tigera-ca-bundle\") pod \"calico-kube-controllers-645b4df4cc-zrxc7\" (UID: \"7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7\") " pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7"
Dec 16 12:37:57.612645 kubelet[2635]: I1216 12:37:57.612621 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-backend-key-pair\") pod \"whisker-dfc95554d-nhcl9\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " pod="calico-system/whisker-dfc95554d-nhcl9"
Dec 16 12:37:57.612668 kubelet[2635]: I1216 12:37:57.612643 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p78g\" (UniqueName: \"kubernetes.io/projected/b10b6c17-28bd-4f18-aab7-57c174d58243-kube-api-access-4p78g\") pod \"coredns-668d6bf9bc-vrgnm\" (UID: \"b10b6c17-28bd-4f18-aab7-57c174d58243\") " pod="kube-system/coredns-668d6bf9bc-vrgnm"
Dec 16 12:37:57.612668 kubelet[2635]: I1216 12:37:57.612665 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fzm2\" (UniqueName: \"kubernetes.io/projected/58c1d842-0739-4b32-93b5-118e08e0afe6-kube-api-access-5fzm2\") pod \"calico-apiserver-9d47ffd8c-qf5gh\" (UID: \"58c1d842-0739-4b32-93b5-118e08e0afe6\") " pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh"
Dec 16 12:37:57.612714 kubelet[2635]: I1216 12:37:57.612683 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12f0abbf-eb53-4a17-a66e-13f45ba59ea7-calico-apiserver-certs\") pod \"calico-apiserver-9d47ffd8c-g75vj\" (UID: \"12f0abbf-eb53-4a17-a66e-13f45ba59ea7\") " pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj"
Dec 16 12:37:57.612714 kubelet[2635]: I1216 12:37:57.612710 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/656ca576-6f38-4a19-a24e-4f26795374d3-config-volume\") pod \"coredns-668d6bf9bc-rnxrq\" (UID: \"656ca576-6f38-4a19-a24e-4f26795374d3\") " pod="kube-system/coredns-668d6bf9bc-rnxrq"
Dec 16 12:37:57.612760 kubelet[2635]: I1216 12:37:57.612728 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvr4f\" (UniqueName: \"kubernetes.io/projected/656ca576-6f38-4a19-a24e-4f26795374d3-kube-api-access-dvr4f\") pod \"coredns-668d6bf9bc-rnxrq\" (UID: \"656ca576-6f38-4a19-a24e-4f26795374d3\") " pod="kube-system/coredns-668d6bf9bc-rnxrq"
Dec 16 12:37:57.841069 containerd[1499]: time="2025-12-16T12:37:57.840445781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4df4cc-zrxc7,Uid:7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:57.844847 containerd[1499]: time="2025-12-16T12:37:57.844522913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-g75vj,Uid:12f0abbf-eb53-4a17-a66e-13f45ba59ea7,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 12:37:57.876593 containerd[1499]: time="2025-12-16T12:37:57.876464378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfc95554d-nhcl9,Uid:05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:57.876747 containerd[1499]: time="2025-12-16T12:37:57.876464418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-qf5gh,Uid:58c1d842-0739-4b32-93b5-118e08e0afe6,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 12:37:57.882942 containerd[1499]: time="2025-12-16T12:37:57.882839726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrgnm,Uid:b10b6c17-28bd-4f18-aab7-57c174d58243,Namespace:kube-system,Attempt:0,}"
Dec 16 12:37:57.883501 containerd[1499]: time="2025-12-16T12:37:57.883279504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2h7z8,Uid:9321e5e2-d99b-4289-9374-6508cb7284e1,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:57.887693 containerd[1499]: time="2025-12-16T12:37:57.887141587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnxrq,Uid:656ca576-6f38-4a19-a24e-4f26795374d3,Namespace:kube-system,Attempt:0,}"
Dec 16 12:37:57.889738 systemd[1]: Created slice kubepods-besteffort-podd7fa23f6_6f8a_4051_b7ef_6f427f114167.slice - libcontainer container kubepods-besteffort-podd7fa23f6_6f8a_4051_b7ef_6f427f114167.slice.
Dec 16 12:37:57.893240 containerd[1499]: time="2025-12-16T12:37:57.893199602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sdm5,Uid:d7fa23f6-6f8a-4051-b7ef-6f427f114167,Namespace:calico-system,Attempt:0,}"
Dec 16 12:37:57.993491 containerd[1499]: time="2025-12-16T12:37:57.993387739Z" level=error msg="Failed to destroy network for sandbox \"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:37:58.002715 containerd[1499]: time="2025-12-16T12:37:58.002659247Z" level=error msg="Failed to destroy network for sandbox \"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:37:58.003915 containerd[1499]: time="2025-12-16T12:37:58.003723010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4df4cc-zrxc7,Uid:7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:37:58.004231 kubelet[2635]: E1216 12:37:58.004191 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 12:37:58.004383 kubelet[2635]: E1216 12:37:58.004256 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7"
Dec 16 12:37:58.004383 kubelet[2635]: E1216 12:37:58.004276 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7"
Dec 16 12:37:58.006124 kubelet[2635]: E1216 12:37:58.005967 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645b4df4cc-zrxc7_calico-system(7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645b4df4cc-zrxc7_calico-system(7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7"
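Every sandbox failure in this stretch has the same root cause: the Calico CNI plugin resolves the local node name from /var/lib/calico/nodename, and that file is only written once the calico/node container is up (calico-node here is still working through its init containers, and the node image pull only starts below). A quick check on an affected node (a sketch, assuming shell access):

    stat /var/lib/calico/nodename

While the file is missing, every CNI add and delete fails with the stat error seen in these entries and kubelet keeps retrying the sandboxes; once calico-node is Running the file should appear and the retries can succeed.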
sandbox \\\"43e0be5adf0ee51dc932719c8779c37549d5b66ca9a0c4de274f7149112cb49c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7" Dec 16 12:37:58.006291 containerd[1499]: time="2025-12-16T12:37:58.006076826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-g75vj,Uid:12f0abbf-eb53-4a17-a66e-13f45ba59ea7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.006563 kubelet[2635]: E1216 12:37:58.006520 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.007265 kubelet[2635]: E1216 12:37:58.006572 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" Dec 16 12:37:58.007265 kubelet[2635]: E1216 12:37:58.006590 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" Dec 16 12:37:58.007265 kubelet[2635]: E1216 12:37:58.006628 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d47ffd8c-g75vj_calico-apiserver(12f0abbf-eb53-4a17-a66e-13f45ba59ea7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d47ffd8c-g75vj_calico-apiserver(12f0abbf-eb53-4a17-a66e-13f45ba59ea7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26e569734eec453e36f8b7eca6d8438be18f24c0365b9b94dd17d7e033964dfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7" Dec 16 12:37:58.011841 containerd[1499]: time="2025-12-16T12:37:58.011781618Z" level=error msg="Failed to destroy network for sandbox \"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.015246 containerd[1499]: time="2025-12-16T12:37:58.015190637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnxrq,Uid:656ca576-6f38-4a19-a24e-4f26795374d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.015444 kubelet[2635]: E1216 12:37:58.015378 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.015499 kubelet[2635]: E1216 12:37:58.015464 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rnxrq" Dec 16 12:37:58.015499 kubelet[2635]: E1216 12:37:58.015486 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rnxrq" Dec 16 12:37:58.015568 kubelet[2635]: E1216 12:37:58.015522 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rnxrq_kube-system(656ca576-6f38-4a19-a24e-4f26795374d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rnxrq_kube-system(656ca576-6f38-4a19-a24e-4f26795374d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c816bb4d58783e5370c15952309ffaba8b57cf9939054278dfc0d234699d2cf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rnxrq" podUID="656ca576-6f38-4a19-a24e-4f26795374d3" Dec 16 12:37:58.018263 containerd[1499]: time="2025-12-16T12:37:58.018222960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:37:58.022326 containerd[1499]: time="2025-12-16T12:37:58.022267125Z" level=error msg="Failed to destroy network for sandbox \"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.027439 containerd[1499]: time="2025-12-16T12:37:58.027384933Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-qf5gh,Uid:58c1d842-0739-4b32-93b5-118e08e0afe6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.028411 kubelet[2635]: E1216 12:37:58.028372 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.028665 kubelet[2635]: E1216 12:37:58.028547 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" Dec 16 12:37:58.028665 kubelet[2635]: E1216 12:37:58.028645 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" Dec 16 12:37:58.028923 kubelet[2635]: E1216 12:37:58.028805 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9d47ffd8c-qf5gh_calico-apiserver(58c1d842-0739-4b32-93b5-118e08e0afe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9d47ffd8c-qf5gh_calico-apiserver(58c1d842-0739-4b32-93b5-118e08e0afe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c9b7fa850c5645367e79a46db2fcb49926113a6f4ec5c7646b72e5cbbbdd371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:37:58.050889 containerd[1499]: time="2025-12-16T12:37:58.050811846Z" level=error msg="Failed to destroy network for sandbox \"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.052544 containerd[1499]: time="2025-12-16T12:37:58.052386270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2h7z8,Uid:9321e5e2-d99b-4289-9374-6508cb7284e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.052705 kubelet[2635]: E1216 12:37:58.052623 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.052705 kubelet[2635]: E1216 12:37:58.052681 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2h7z8" Dec 16 12:37:58.052705 kubelet[2635]: E1216 12:37:58.052700 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2h7z8" Dec 16 12:37:58.052849 kubelet[2635]: E1216 12:37:58.052743 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2h7z8_calico-system(9321e5e2-d99b-4289-9374-6508cb7284e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2h7z8_calico-system(9321e5e2-d99b-4289-9374-6508cb7284e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56056012902cfdb2bd5e4414f459c08234ea72865f1fd9d295054ab8caa3d5ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 12:37:58.055227 containerd[1499]: time="2025-12-16T12:37:58.055147823Z" level=error msg="Failed to destroy network for sandbox \"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.056485 containerd[1499]: time="2025-12-16T12:37:58.056452316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sdm5,Uid:d7fa23f6-6f8a-4051-b7ef-6f427f114167,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.056915 kubelet[2635]: E1216 12:37:58.056728 2635 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.056915 kubelet[2635]: E1216 12:37:58.056781 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sdm5" Dec 16 12:37:58.056915 kubelet[2635]: E1216 12:37:58.056805 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8sdm5" Dec 16 12:37:58.057255 kubelet[2635]: E1216 12:37:58.057209 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e225ad2f11b1b3f7115fc1f97f5f2dca8b7314d45aa83fa9841b829746dc3bf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167" Dec 16 12:37:58.057510 containerd[1499]: time="2025-12-16T12:37:58.057476957Z" level=error msg="Failed to destroy network for sandbox \"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.058913 containerd[1499]: time="2025-12-16T12:37:58.058842933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrgnm,Uid:b10b6c17-28bd-4f18-aab7-57c174d58243,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.059158 kubelet[2635]: E1216 12:37:58.059129 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.059286 
kubelet[2635]: E1216 12:37:58.059267 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrgnm" Dec 16 12:37:58.059443 kubelet[2635]: E1216 12:37:58.059375 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vrgnm" Dec 16 12:37:58.059508 kubelet[2635]: E1216 12:37:58.059416 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vrgnm_kube-system(b10b6c17-28bd-4f18-aab7-57c174d58243)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vrgnm_kube-system(b10b6c17-28bd-4f18-aab7-57c174d58243)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70b938ddff0020738f509eb2d432b72e4a3e26dd552fd1f4c2e94535e1ccbd80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vrgnm" podUID="b10b6c17-28bd-4f18-aab7-57c174d58243" Dec 16 12:37:58.059903 containerd[1499]: time="2025-12-16T12:37:58.059873815Z" level=error msg="Failed to destroy network for sandbox \"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.062497 containerd[1499]: time="2025-12-16T12:37:58.062454360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dfc95554d-nhcl9,Uid:05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.062869 kubelet[2635]: E1216 12:37:58.062654 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:37:58.062869 kubelet[2635]: E1216 12:37:58.062701 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dfc95554d-nhcl9" Dec 16 12:37:58.062869 kubelet[2635]: E1216 12:37:58.062719 2635 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dfc95554d-nhcl9" Dec 16 12:37:58.062963 kubelet[2635]: E1216 12:37:58.062758 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dfc95554d-nhcl9_calico-system(05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dfc95554d-nhcl9_calico-system(05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"738f525bf840cad724f0827495a3d09a9cf8354f5bdc1c2a74254ee33240cf8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dfc95554d-nhcl9" podUID="05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880" Dec 16 12:38:01.077938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323255314.mount: Deactivated successfully. Dec 16 12:38:01.406008 containerd[1499]: time="2025-12-16T12:38:01.390191659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:38:01.406936 containerd[1499]: time="2025-12-16T12:38:01.405030568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:38:01.426911 containerd[1499]: time="2025-12-16T12:38:01.426843374Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:38:01.429143 containerd[1499]: time="2025-12-16T12:38:01.429099137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:38:01.429919 containerd[1499]: time="2025-12-16T12:38:01.429813924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.411547921s" Dec 16 12:38:01.429919 containerd[1499]: time="2025-12-16T12:38:01.429877166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:38:01.439332 containerd[1499]: time="2025-12-16T12:38:01.439287634Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:38:01.466187 containerd[1499]: time="2025-12-16T12:38:01.464924181Z" level=info msg="Container 
2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:38:01.466466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142161961.mount: Deactivated successfully. Dec 16 12:38:01.479641 containerd[1499]: time="2025-12-16T12:38:01.479591284Z" level=info msg="CreateContainer within sandbox \"96335171be0acc406fbe43b032e56a6fe7e1efa3e8a18e05a4262a79995d3fc9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb\"" Dec 16 12:38:01.480150 containerd[1499]: time="2025-12-16T12:38:01.480114063Z" level=info msg="StartContainer for \"2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb\"" Dec 16 12:38:01.481675 containerd[1499]: time="2025-12-16T12:38:01.481633959Z" level=info msg="connecting to shim 2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb" address="unix:///run/containerd/s/1b626b1f8136ce5287e82e2762215c80d213ec29aebd88d423a6095ebcb4e6de" protocol=ttrpc version=3 Dec 16 12:38:01.502054 systemd[1]: Started cri-containerd-2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb.scope - libcontainer container 2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb. Dec 16 12:38:01.607121 containerd[1499]: time="2025-12-16T12:38:01.607063035Z" level=info msg="StartContainer for \"2262d70e058e97f4a0a7b150af285f8aae3821fecbba2043df3670b3b22327eb\" returns successfully" Dec 16 12:38:01.726780 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:38:01.726929 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 16 12:38:01.941478 kubelet[2635]: I1216 12:38:01.941430 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk798\" (UniqueName: \"kubernetes.io/projected/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-kube-api-access-kk798\") pod \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " Dec 16 12:38:01.943121 kubelet[2635]: I1216 12:38:01.942294 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-backend-key-pair\") pod \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " Dec 16 12:38:01.943261 kubelet[2635]: I1216 12:38:01.943243 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-ca-bundle\") pod \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\" (UID: \"05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880\") " Dec 16 12:38:01.945088 kubelet[2635]: I1216 12:38:01.945040 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880" (UID: "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:38:01.946976 kubelet[2635]: I1216 12:38:01.946929 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-kube-api-access-kk798" (OuterVolumeSpecName: "kube-api-access-kk798") pod "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880" (UID: "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880"). InnerVolumeSpecName "kube-api-access-kk798". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:38:01.947177 kubelet[2635]: I1216 12:38:01.947141 2635 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880" (UID: "05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:38:02.034176 systemd[1]: Removed slice kubepods-besteffort-pod05b0c0cf_fa4e_4d16_bf34_9f5bf04ca880.slice - libcontainer container kubepods-besteffort-pod05b0c0cf_fa4e_4d16_bf34_9f5bf04ca880.slice. Dec 16 12:38:02.043705 kubelet[2635]: I1216 12:38:02.043635 2635 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kk798\" (UniqueName: \"kubernetes.io/projected/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-kube-api-access-kk798\") on node \"localhost\" DevicePath \"\"" Dec 16 12:38:02.043705 kubelet[2635]: I1216 12:38:02.043672 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:38:02.043705 kubelet[2635]: I1216 12:38:02.043682 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:38:02.057692 kubelet[2635]: I1216 12:38:02.057568 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gkx5n" podStartSLOduration=1.7298577370000001 podStartE2EDuration="13.05741034s" podCreationTimestamp="2025-12-16 12:37:49 +0000 UTC" firstStartedPulling="2025-12-16 12:37:50.102949986 +0000 UTC m=+26.309022653" lastFinishedPulling="2025-12-16 12:38:01.430502589 +0000 UTC m=+37.636575256" observedRunningTime="2025-12-16 12:38:02.046159937 +0000 UTC m=+38.252232644" watchObservedRunningTime="2025-12-16 12:38:02.05741034 +0000 UTC m=+38.263483047" Dec 16 12:38:02.078877 systemd[1]: var-lib-kubelet-pods-05b0c0cf\x2dfa4e\x2d4d16\x2dbf34\x2d9f5bf04ca880-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkk798.mount: Deactivated successfully. Dec 16 12:38:02.081932 systemd[1]: var-lib-kubelet-pods-05b0c0cf\x2dfa4e\x2d4d16\x2dbf34\x2d9f5bf04ca880-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:38:02.100619 systemd[1]: Created slice kubepods-besteffort-pod3f992751_c3f5_48d0_b7ab_cb9e9ad078f7.slice - libcontainer container kubepods-besteffort-pod3f992751_c3f5_48d0_b7ab_cb9e9ad078f7.slice. 
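
A note on the burst of failures at 12:37:58 above: they all share one root cause. The Calico CNI plugin runs on every sandbox add/delete and resolves the node name by reading /var/lib/calico/nodename, a file the calico/node container writes into its /var/lib/calico hostPath mount once it starts. Until calico-node is running (its image only finishes pulling at 12:38:01.429 and the container starts at 12:38:01.607), that stat returns ENOENT, the plugin aborts, and kubelet re-queues each pod with CreatePodSandboxError, which is why the identical error repeats for every pending pod. A minimal Go sketch of that lookup, using a hypothetical nodeName helper rather than Calico's actual source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodeName mimics the lookup behind the errors above: read the node name
// that calico/node persists into its /var/lib/calico hostPath mount.
func nodeName() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		// The exact condition logged at 12:37:58: calico-node has not started
		// yet, so the file does not exist and every sandbox ADD/DELETE fails.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}

From 12:38:02 onward the sandbox setups below reach the full CNI/IPAM path instead of failing at this stat.
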
Dec 16 12:38:02.144012 kubelet[2635]: I1216 12:38:02.143949 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f992751-c3f5-48d0-b7ab-cb9e9ad078f7-whisker-backend-key-pair\") pod \"whisker-74989f6bd6-bfk22\" (UID: \"3f992751-c3f5-48d0-b7ab-cb9e9ad078f7\") " pod="calico-system/whisker-74989f6bd6-bfk22" Dec 16 12:38:02.144012 kubelet[2635]: I1216 12:38:02.144023 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7rd\" (UniqueName: \"kubernetes.io/projected/3f992751-c3f5-48d0-b7ab-cb9e9ad078f7-kube-api-access-pm7rd\") pod \"whisker-74989f6bd6-bfk22\" (UID: \"3f992751-c3f5-48d0-b7ab-cb9e9ad078f7\") " pod="calico-system/whisker-74989f6bd6-bfk22" Dec 16 12:38:02.144206 kubelet[2635]: I1216 12:38:02.144063 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f992751-c3f5-48d0-b7ab-cb9e9ad078f7-whisker-ca-bundle\") pod \"whisker-74989f6bd6-bfk22\" (UID: \"3f992751-c3f5-48d0-b7ab-cb9e9ad078f7\") " pod="calico-system/whisker-74989f6bd6-bfk22" Dec 16 12:38:02.404390 containerd[1499]: time="2025-12-16T12:38:02.404348467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74989f6bd6-bfk22,Uid:3f992751-c3f5-48d0-b7ab-cb9e9ad078f7,Namespace:calico-system,Attempt:0,}" Dec 16 12:38:02.637860 systemd-networkd[1440]: calic24c7512bf7: Link UP Dec 16 12:38:02.638060 systemd-networkd[1440]: calic24c7512bf7: Gained carrier Dec 16 12:38:02.662433 containerd[1499]: 2025-12-16 12:38:02.428 [INFO][3795] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:38:02.662433 containerd[1499]: 2025-12-16 12:38:02.478 [INFO][3795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74989f6bd6--bfk22-eth0 whisker-74989f6bd6- calico-system 3f992751-c3f5-48d0-b7ab-cb9e9ad078f7 857 0 2025-12-16 12:38:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74989f6bd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74989f6bd6-bfk22 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic24c7512bf7 [] [] }} ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-" Dec 16 12:38:02.662433 containerd[1499]: 2025-12-16 12:38:02.478 [INFO][3795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.662433 containerd[1499]: 2025-12-16 12:38:02.551 [INFO][3810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" HandleID="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Workload="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.551 [INFO][3810] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" HandleID="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Workload="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000219b00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74989f6bd6-bfk22", "timestamp":"2025-12-16 12:38:02.55163083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.552 [INFO][3810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.552 [INFO][3810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.552 [INFO][3810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.563 [INFO][3810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" host="localhost" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.570 [INFO][3810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.575 [INFO][3810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.577 [INFO][3810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.580 [INFO][3810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:02.663021 containerd[1499]: 2025-12-16 12:38:02.580 [INFO][3810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" host="localhost" Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.582 [INFO][3810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882 Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.594 [INFO][3810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" host="localhost" Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.627 [INFO][3810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" host="localhost" Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.627 [INFO][3810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" host="localhost" Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.627 [INFO][3810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:38:02.663299 containerd[1499]: 2025-12-16 12:38:02.627 [INFO][3810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" HandleID="k8s-pod-network.985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Workload="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.663446 containerd[1499]: 2025-12-16 12:38:02.629 [INFO][3795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74989f6bd6--bfk22-eth0", GenerateName:"whisker-74989f6bd6-", Namespace:"calico-system", SelfLink:"", UID:"3f992751-c3f5-48d0-b7ab-cb9e9ad078f7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 38, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74989f6bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74989f6bd6-bfk22", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic24c7512bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:02.663446 containerd[1499]: 2025-12-16 12:38:02.630 [INFO][3795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.663535 containerd[1499]: 2025-12-16 12:38:02.630 [INFO][3795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic24c7512bf7 ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.663535 containerd[1499]: 2025-12-16 12:38:02.640 [INFO][3795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.663574 containerd[1499]: 2025-12-16 12:38:02.641 [INFO][3795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74989f6bd6--bfk22-eth0", GenerateName:"whisker-74989f6bd6-", Namespace:"calico-system", SelfLink:"", UID:"3f992751-c3f5-48d0-b7ab-cb9e9ad078f7", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 38, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74989f6bd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882", Pod:"whisker-74989f6bd6-bfk22", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic24c7512bf7", MAC:"16:49:84:ff:1a:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:02.663636 containerd[1499]: 2025-12-16 12:38:02.660 [INFO][3795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" Namespace="calico-system" Pod="whisker-74989f6bd6-bfk22" WorkloadEndpoint="localhost-k8s-whisker--74989f6bd6--bfk22-eth0" Dec 16 12:38:02.710133 containerd[1499]: time="2025-12-16T12:38:02.710085875Z" level=info msg="connecting to shim 985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882" address="unix:///run/containerd/s/aa0beb098412ea7b2a080e087a48a463320758c22b4d16216b1e38f5ce789c7e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:02.731041 systemd[1]: Started cri-containerd-985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882.scope - libcontainer container 985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882. 
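
The "connecting to shim ... protocol=ttrpc version=3" lines here and at 12:38:01 are containerd dialing the per-container shim over a unix-domain socket under /run/containerd/s/ and then speaking ttrpc, containerd's lightweight protobuf RPC protocol, over that connection. The transport step alone looks roughly like the Go sketch below; dialShim is a hypothetical helper, and a real client would wrap the connection in a ttrpc client rather than use it raw:

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// dialShim shows the step behind "connecting to shim ... address=unix:///run/containerd/s/..."
// as a plain unix-domain socket dial; the ttrpc protocol then runs over this connection.
func dialShim(address string) (net.Conn, error) {
	path := strings.TrimPrefix(address, "unix://")
	return net.DialTimeout("unix", path, 2*time.Second)
}

func main() {
	conn, err := dialShim("unix:///run/containerd/s/aa0beb098412ea7b2a080e087a48a463320758c22b4d16216b1e38f5ce789c7e")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected")
}
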
Dec 16 12:38:02.743620 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:02.765803 containerd[1499]: time="2025-12-16T12:38:02.765753592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74989f6bd6-bfk22,Uid:3f992751-c3f5-48d0-b7ab-cb9e9ad078f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"985310799270e712b9627ca8d25ea58df9c3a6d2f2c1ed44651c397573612882\"" Dec 16 12:38:02.768115 containerd[1499]: time="2025-12-16T12:38:02.768076795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:38:02.995569 containerd[1499]: time="2025-12-16T12:38:02.995442312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:02.998472 containerd[1499]: time="2025-12-16T12:38:02.998410498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:38:02.998472 containerd[1499]: time="2025-12-16T12:38:02.998464180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:38:02.998932 kubelet[2635]: E1216 12:38:02.998751 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:38:02.998932 kubelet[2635]: E1216 12:38:02.998800 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:38:03.001518 kubelet[2635]: E1216 12:38:03.001448 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:907bd910ce864eb68ce15c1c07b8d8bb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:03.003997 containerd[1499]: time="2025-12-16T12:38:03.003960855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:38:03.029790 kubelet[2635]: I1216 12:38:03.029761 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:38:03.226677 containerd[1499]: time="2025-12-16T12:38:03.226622216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:03.227667 containerd[1499]: time="2025-12-16T12:38:03.227610490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:38:03.227727 containerd[1499]: time="2025-12-16T12:38:03.227680173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:38:03.228131 kubelet[2635]: E1216 12:38:03.227873 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:38:03.228131 kubelet[2635]: E1216 12:38:03.227926 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:38:03.228297 kubelet[2635]: E1216 12:38:03.228041 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:03.229281 kubelet[2635]: E1216 12:38:03.229224 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74989f6bd6-bfk22" podUID="3f992751-c3f5-48d0-b7ab-cb9e9ad078f7" Dec 16 12:38:03.884352 kubelet[2635]: I1216 12:38:03.884127 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880" path="/var/lib/kubelet/pods/05b0c0cf-fa4e-4d16-bf34-9f5bf04ca880/volumes" Dec 16 12:38:04.032652 kubelet[2635]: E1216 12:38:04.032604 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74989f6bd6-bfk22" podUID="3f992751-c3f5-48d0-b7ab-cb9e9ad078f7" Dec 16 12:38:04.388234 systemd-networkd[1440]: calic24c7512bf7: Gained IPv6LL Dec 16 12:38:07.524996 kubelet[2635]: I1216 12:38:07.524793 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:38:07.604799 kubelet[2635]: I1216 12:38:07.604745 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:38:08.701086 systemd-networkd[1440]: vxlan.calico: Link UP Dec 16 12:38:08.701093 systemd-networkd[1440]: vxlan.calico: Gained carrier Dec 16 12:38:08.889666 containerd[1499]: time="2025-12-16T12:38:08.889621016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-qf5gh,Uid:58c1d842-0739-4b32-93b5-118e08e0afe6,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:38:08.891196 containerd[1499]: time="2025-12-16T12:38:08.890992418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnxrq,Uid:656ca576-6f38-4a19-a24e-4f26795374d3,Namespace:kube-system,Attempt:0,}" Dec 16 12:38:09.148138 systemd-networkd[1440]: calib71b4e4f7ef: Link UP Dec 16 12:38:09.149729 systemd-networkd[1440]: calib71b4e4f7ef: Gained carrier Dec 16 12:38:09.171967 containerd[1499]: 2025-12-16 12:38:09.076 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0 coredns-668d6bf9bc- kube-system 656ca576-6f38-4a19-a24e-4f26795374d3 801 0 2025-12-16 12:37:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rnxrq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib71b4e4f7ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-" Dec 16 12:38:09.171967 containerd[1499]: 2025-12-16 12:38:09.076 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 
12:38:09.171967 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4305] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" HandleID="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Workload="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4305] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" HandleID="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Workload="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0820), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rnxrq", "timestamp":"2025-12-16 12:38:09.101508996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.112 [INFO][4305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" host="localhost" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.118 [INFO][4305] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.122 [INFO][4305] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.124 [INFO][4305] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.127 [INFO][4305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:09.172206 containerd[1499]: 2025-12-16 12:38:09.127 [INFO][4305] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" host="localhost" Dec 16 12:38:09.172463 containerd[1499]: 2025-12-16 12:38:09.128 [INFO][4305] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459 Dec 16 12:38:09.172463 containerd[1499]: 2025-12-16 12:38:09.133 [INFO][4305] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" host="localhost" Dec 16 12:38:09.172463 containerd[1499]: 2025-12-16 12:38:09.140 [INFO][4305] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" host="localhost" Dec 16 12:38:09.172463 containerd[1499]: 
2025-12-16 12:38:09.140 [INFO][4305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" host="localhost" Dec 16 12:38:09.172463 containerd[1499]: 2025-12-16 12:38:09.140 [INFO][4305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:38:09.172463 containerd[1499]: 2025-12-16 12:38:09.140 [INFO][4305] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" HandleID="k8s-pod-network.667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Workload="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 12:38:09.172621 containerd[1499]: 2025-12-16 12:38:09.144 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"656ca576-6f38-4a19-a24e-4f26795374d3", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rnxrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71b4e4f7ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:09.172681 containerd[1499]: 2025-12-16 12:38:09.145 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 12:38:09.172681 containerd[1499]: 2025-12-16 12:38:09.145 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib71b4e4f7ef ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 
12:38:09.172681 containerd[1499]: 2025-12-16 12:38:09.149 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 12:38:09.172746 containerd[1499]: 2025-12-16 12:38:09.150 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"656ca576-6f38-4a19-a24e-4f26795374d3", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459", Pod:"coredns-668d6bf9bc-rnxrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71b4e4f7ef", MAC:"e2:9f:84:a4:0e:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:09.172746 containerd[1499]: 2025-12-16 12:38:09.164 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" Namespace="kube-system" Pod="coredns-668d6bf9bc-rnxrq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rnxrq-eth0" Dec 16 12:38:09.211129 containerd[1499]: time="2025-12-16T12:38:09.211086037Z" level=info msg="connecting to shim 667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459" address="unix:///run/containerd/s/02aea190b840f43d39db797bb398b65fa0ce4ff88befe5fd6db760545e5fb7ed" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:09.245715 systemd[1]: Started cri-containerd-667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459.scope - libcontainer container 667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459. 
Dec 16 12:38:09.257715 systemd-networkd[1440]: cali33004b7842a: Link UP Dec 16 12:38:09.258193 systemd-networkd[1440]: cali33004b7842a: Gained carrier Dec 16 12:38:09.263184 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.073 [INFO][4271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0 calico-apiserver-9d47ffd8c- calico-apiserver 58c1d842-0739-4b32-93b5-118e08e0afe6 792 0 2025-12-16 12:37:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d47ffd8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d47ffd8c-qf5gh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33004b7842a [] [] }} ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.074 [INFO][4271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.101 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" HandleID="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.102 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" HandleID="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9d47ffd8c-qf5gh", "timestamp":"2025-12-16 12:38:09.101871487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.102 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.140 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.141 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.213 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.224 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.230 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.233 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.237 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.237 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.239 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.244 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.252 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.253 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" host="localhost" Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.253 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:38:09.280433 containerd[1499]: 2025-12-16 12:38:09.253 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" HandleID="k8s-pod-network.17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.255 [INFO][4271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0", GenerateName:"calico-apiserver-9d47ffd8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c1d842-0739-4b32-93b5-118e08e0afe6", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d47ffd8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d47ffd8c-qf5gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33004b7842a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.256 [INFO][4271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.256 [INFO][4271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33004b7842a ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.258 [INFO][4271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.259 [INFO][4271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0", GenerateName:"calico-apiserver-9d47ffd8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"58c1d842-0739-4b32-93b5-118e08e0afe6", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d47ffd8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f", Pod:"calico-apiserver-9d47ffd8c-qf5gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33004b7842a", MAC:"aa:1a:5a:66:16:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:09.281328 containerd[1499]: 2025-12-16 12:38:09.274 [INFO][4271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-qf5gh" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--qf5gh-eth0" Dec 16 12:38:09.294592 containerd[1499]: time="2025-12-16T12:38:09.294477173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnxrq,Uid:656ca576-6f38-4a19-a24e-4f26795374d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459\"" Dec 16 12:38:09.309449 containerd[1499]: time="2025-12-16T12:38:09.309406780Z" level=info msg="CreateContainer within sandbox \"667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:38:09.321738 containerd[1499]: time="2025-12-16T12:38:09.321693268Z" level=info msg="Container 1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:38:09.324887 containerd[1499]: time="2025-12-16T12:38:09.324846842Z" level=info msg="connecting to shim 17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f" address="unix:///run/containerd/s/e987dcceb440abfc76e9c5175a26d66ac231b0b177f0ea0cb034c17277c47ff3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:09.342722 containerd[1499]: time="2025-12-16T12:38:09.342634055Z" level=info msg="CreateContainer within sandbox \"667cb44fb997b8db7f416cc985e69a79521f7920a735b987265468223a26f459\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff\"" Dec 16 12:38:09.343274 containerd[1499]: time="2025-12-16T12:38:09.343138310Z" level=info msg="StartContainer for \"1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff\"" Dec 16 12:38:09.344174 containerd[1499]: time="2025-12-16T12:38:09.344150420Z" level=info msg="connecting to shim 1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff" address="unix:///run/containerd/s/02aea190b840f43d39db797bb398b65fa0ce4ff88befe5fd6db760545e5fb7ed" protocol=ttrpc version=3 Dec 16 12:38:09.347063 systemd[1]: Started cri-containerd-17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f.scope - libcontainer container 17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f. Dec 16 12:38:09.361682 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:09.369029 systemd[1]: Started cri-containerd-1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff.scope - libcontainer container 1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff. Dec 16 12:38:09.398219 containerd[1499]: time="2025-12-16T12:38:09.398149477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-qf5gh,Uid:58c1d842-0739-4b32-93b5-118e08e0afe6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"17d17df758aeb5912f5851bf9f2b6b7b861220350389cf140bbdcb9eb6f7f24f\"" Dec 16 12:38:09.405449 containerd[1499]: time="2025-12-16T12:38:09.405269170Z" level=info msg="StartContainer for \"1b3a9e3fdad86cf1ee244dd401c02bcaebb9248db6ff047294a66b2163e958ff\" returns successfully" Dec 16 12:38:09.406578 containerd[1499]: time="2025-12-16T12:38:09.406496847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:38:09.633574 containerd[1499]: time="2025-12-16T12:38:09.633505043Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:09.638219 containerd[1499]: time="2025-12-16T12:38:09.638175463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:38:09.638305 containerd[1499]: time="2025-12-16T12:38:09.638258505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:09.638702 kubelet[2635]: E1216 12:38:09.638443 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:09.638702 kubelet[2635]: E1216 12:38:09.638495 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:09.638702 kubelet[2635]: E1216 12:38:09.638647 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fzm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d47ffd8c-qf5gh_calico-apiserver(58c1d842-0739-4b32-93b5-118e08e0afe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:09.639933 kubelet[2635]: E1216 12:38:09.639884 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:38:09.883346 containerd[1499]: time="2025-12-16T12:38:09.883293881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrgnm,Uid:b10b6c17-28bd-4f18-aab7-57c174d58243,Namespace:kube-system,Attempt:0,}" Dec 16 12:38:10.033633 systemd-networkd[1440]: cali961a57f4d35: Link UP Dec 16 12:38:10.033848 systemd-networkd[1440]: cali961a57f4d35: Gained carrier Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.923 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0 coredns-668d6bf9bc- kube-system b10b6c17-28bd-4f18-aab7-57c174d58243 800 0 2025-12-16 12:37:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-vrgnm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali961a57f4d35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.923 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.958 [INFO][4479] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" HandleID="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Workload="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.958 [INFO][4479] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" HandleID="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Workload="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d26f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-vrgnm", "timestamp":"2025-12-16 12:38:09.95806596 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.958 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.958 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.958 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.977 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.983 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.987 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.989 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.992 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.992 [INFO][4479] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:09.993 [INFO][4479] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9 Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:10.015 [INFO][4479] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:10.028 [INFO][4479] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:10.028 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" host="localhost" Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:10.028 [INFO][4479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:38:10.063702 containerd[1499]: 2025-12-16 12:38:10.028 [INFO][4479] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" HandleID="k8s-pod-network.c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Workload="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.065606 containerd[1499]: 2025-12-16 12:38:10.030 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b10b6c17-28bd-4f18-aab7-57c174d58243", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-vrgnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali961a57f4d35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:10.065606 containerd[1499]: 2025-12-16 12:38:10.030 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.065606 containerd[1499]: 2025-12-16 12:38:10.030 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali961a57f4d35 ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.065606 containerd[1499]: 2025-12-16 12:38:10.034 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.065606 
containerd[1499]: 2025-12-16 12:38:10.034 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b10b6c17-28bd-4f18-aab7-57c174d58243", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9", Pod:"coredns-668d6bf9bc-vrgnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali961a57f4d35", MAC:"8e:3e:ad:56:60:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:10.065606 containerd[1499]: 2025-12-16 12:38:10.058 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" Namespace="kube-system" Pod="coredns-668d6bf9bc-vrgnm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--vrgnm-eth0" Dec 16 12:38:10.065810 kubelet[2635]: E1216 12:38:10.064079 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:38:10.114591 containerd[1499]: time="2025-12-16T12:38:10.114519330Z" level=info msg="connecting to shim c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9" address="unix:///run/containerd/s/282d527f9a9956c5da351ba2ec71f3b183bcb38910e897bccd8cb7f2916fd65a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:10.137008 systemd[1]: Started 
cri-containerd-c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9.scope - libcontainer container c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9. Dec 16 12:38:10.148026 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:10.175288 containerd[1499]: time="2025-12-16T12:38:10.175213748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrgnm,Uid:b10b6c17-28bd-4f18-aab7-57c174d58243,Namespace:kube-system,Attempt:0,} returns sandbox id \"c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9\"" Dec 16 12:38:10.178807 containerd[1499]: time="2025-12-16T12:38:10.178769172Z" level=info msg="CreateContainer within sandbox \"c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:38:10.192656 containerd[1499]: time="2025-12-16T12:38:10.192599617Z" level=info msg="Container cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:38:10.207320 containerd[1499]: time="2025-12-16T12:38:10.206909276Z" level=info msg="CreateContainer within sandbox \"c72083b8e275ccd94385b6db538cadde135fa1062fb52c6b0ce998015637e5c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953\"" Dec 16 12:38:10.208551 containerd[1499]: time="2025-12-16T12:38:10.208510523Z" level=info msg="StartContainer for \"cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953\"" Dec 16 12:38:10.209738 containerd[1499]: time="2025-12-16T12:38:10.209690317Z" level=info msg="connecting to shim cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953" address="unix:///run/containerd/s/282d527f9a9956c5da351ba2ec71f3b183bcb38910e897bccd8cb7f2916fd65a" protocol=ttrpc version=3 Dec 16 12:38:10.239008 systemd[1]: Started cri-containerd-cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953.scope - libcontainer container cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953. Dec 16 12:38:10.278084 containerd[1499]: time="2025-12-16T12:38:10.278041799Z" level=info msg="StartContainer for \"cb141144a0007178ff530d351af3c112f249b740c0776613f06c3d93a526a953\" returns successfully" Dec 16 12:38:10.467950 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Dec 16 12:38:10.532994 systemd-networkd[1440]: cali33004b7842a: Gained IPv6LL Dec 16 12:38:10.880681 containerd[1499]: time="2025-12-16T12:38:10.880634168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2h7z8,Uid:9321e5e2-d99b-4289-9374-6508cb7284e1,Namespace:calico-system,Attempt:0,}" Dec 16 12:38:11.017190 systemd[1]: Started sshd@7-10.0.0.86:22-10.0.0.1:47550.service - OpenSSH per-connection server daemon (10.0.0.1:47550). 
Dec 16 12:38:11.028183 systemd-networkd[1440]: cali0772a8720bb: Link UP Dec 16 12:38:11.028321 systemd-networkd[1440]: cali0772a8720bb: Gained carrier Dec 16 12:38:11.043753 kubelet[2635]: I1216 12:38:11.043663 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rnxrq" podStartSLOduration=41.043548354 podStartE2EDuration="41.043548354s" podCreationTimestamp="2025-12-16 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:38:10.104120185 +0000 UTC m=+46.310192932" watchObservedRunningTime="2025-12-16 12:38:11.043548354 +0000 UTC m=+47.249621061" Dec 16 12:38:11.044930 systemd-networkd[1440]: calib71b4e4f7ef: Gained IPv6LL Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.922 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--2h7z8-eth0 goldmane-666569f655- calico-system 9321e5e2-d99b-4289-9374-6508cb7284e1 798 0 2025-12-16 12:37:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-2h7z8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0772a8720bb [] [] }} ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.922 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.952 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" HandleID="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Workload="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.952 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" HandleID="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Workload="localhost-k8s-goldmane--666569f655--2h7z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-2h7z8", "timestamp":"2025-12-16 12:38:10.952458592 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.952 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.952 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.952 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.962 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.967 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.979 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.982 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.992 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:10.992 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.002 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2 Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.008 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.020 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.020 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" host="localhost" Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.020 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:38:11.050184 containerd[1499]: 2025-12-16 12:38:11.020 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" HandleID="k8s-pod-network.7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Workload="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.024 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2h7z8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9321e5e2-d99b-4289-9374-6508cb7284e1", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-2h7z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0772a8720bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.024 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.024 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0772a8720bb ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.026 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.026 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2h7z8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9321e5e2-d99b-4289-9374-6508cb7284e1", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2", Pod:"goldmane-666569f655-2h7z8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0772a8720bb", MAC:"52:1f:4e:4e:a2:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:11.050751 containerd[1499]: 2025-12-16 12:38:11.043 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" Namespace="calico-system" Pod="goldmane-666569f655-2h7z8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2h7z8-eth0" Dec 16 12:38:11.082330 kubelet[2635]: E1216 12:38:11.082267 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:38:11.097842 sshd[4609]: Accepted publickey for core from 10.0.0.1 port 47550 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:11.101027 sshd-session[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:11.108339 systemd-logind[1482]: New session 8 of user core. Dec 16 12:38:11.113379 containerd[1499]: time="2025-12-16T12:38:11.113300234Z" level=info msg="connecting to shim 7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2" address="unix:///run/containerd/s/d801210e2a47050f65dc616a5ae5b6321c09d0b7033891d7fd3c8d0715033dae" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:11.115179 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:38:11.128398 kubelet[2635]: I1216 12:38:11.127286 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vrgnm" podStartSLOduration=41.127265875 podStartE2EDuration="41.127265875s" podCreationTimestamp="2025-12-16 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:38:11.112993105 +0000 UTC m=+47.319065812" watchObservedRunningTime="2025-12-16 12:38:11.127265875 +0000 UTC m=+47.333338582" Dec 16 12:38:11.155038 systemd[1]: Started cri-containerd-7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2.scope - libcontainer container 7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2. Dec 16 12:38:11.175151 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:11.208358 containerd[1499]: time="2025-12-16T12:38:11.208300399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2h7z8,Uid:9321e5e2-d99b-4289-9374-6508cb7284e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a4c977beac14a0d519e488e332c818311bb1678545707d82e08edeb9773cef2\"" Dec 16 12:38:11.210945 containerd[1499]: time="2025-12-16T12:38:11.210908153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:38:11.313849 sshd[4640]: Connection closed by 10.0.0.1 port 47550 Dec 16 12:38:11.312972 sshd-session[4609]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:11.316935 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:38:11.317337 systemd[1]: sshd@7-10.0.0.86:22-10.0.0.1:47550.service: Deactivated successfully. Dec 16 12:38:11.320491 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:38:11.321943 systemd-logind[1482]: Removed session 8. 
Dec 16 12:38:11.419651 containerd[1499]: time="2025-12-16T12:38:11.419520216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:11.421111 containerd[1499]: time="2025-12-16T12:38:11.421067541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:38:11.421336 containerd[1499]: time="2025-12-16T12:38:11.421157183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:11.421371 kubelet[2635]: E1216 12:38:11.421300 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:38:11.421371 kubelet[2635]: E1216 12:38:11.421348 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:38:11.421540 kubelet[2635]: E1216 12:38:11.421486 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jxxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2h7z8_calico-system(9321e5e2-d99b-4289-9374-6508cb7284e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:11.422924 kubelet[2635]: E1216 12:38:11.422876 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 12:38:11.748007 systemd-networkd[1440]: cali961a57f4d35: Gained IPv6LL Dec 16 12:38:11.881533 containerd[1499]: time="2025-12-16T12:38:11.881490905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-g75vj,Uid:12f0abbf-eb53-4a17-a66e-13f45ba59ea7,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:38:12.017357 systemd-networkd[1440]: cali0c32cfa4cf1: Link UP Dec 16 12:38:12.017871 systemd-networkd[1440]: cali0c32cfa4cf1: Gained carrier Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.922 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0 calico-apiserver-9d47ffd8c- calico-apiserver 12f0abbf-eb53-4a17-a66e-13f45ba59ea7 799 0 2025-12-16 12:37:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9d47ffd8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9d47ffd8c-g75vj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c32cfa4cf1 [] [] }} ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.923 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" 
Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.948 [INFO][4699] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" HandleID="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.948 [INFO][4699] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" HandleID="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9d47ffd8c-g75vj", "timestamp":"2025-12-16 12:38:11.948333142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.948 [INFO][4699] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.948 [INFO][4699] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.948 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.958 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.976 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.983 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.986 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.990 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.990 [INFO][4699] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.993 [INFO][4699] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6 Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:11.999 [INFO][4699] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:12.008 [INFO][4699] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" host="localhost" Dec 16 
12:38:12.034274 containerd[1499]: 2025-12-16 12:38:12.008 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" host="localhost" Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:12.009 [INFO][4699] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:38:12.034274 containerd[1499]: 2025-12-16 12:38:12.009 [INFO][4699] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" HandleID="k8s-pod-network.73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Workload="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.014 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0", GenerateName:"calico-apiserver-9d47ffd8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12f0abbf-eb53-4a17-a66e-13f45ba59ea7", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d47ffd8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9d47ffd8c-g75vj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c32cfa4cf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.014 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.014 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c32cfa4cf1 ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.016 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.018 [INFO][4683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0", GenerateName:"calico-apiserver-9d47ffd8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12f0abbf-eb53-4a17-a66e-13f45ba59ea7", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9d47ffd8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6", Pod:"calico-apiserver-9d47ffd8c-g75vj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c32cfa4cf1", MAC:"0a:5f:e1:d2:22:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:12.034940 containerd[1499]: 2025-12-16 12:38:12.029 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" Namespace="calico-apiserver" Pod="calico-apiserver-9d47ffd8c-g75vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--9d47ffd8c--g75vj-eth0" Dec 16 12:38:12.067693 containerd[1499]: time="2025-12-16T12:38:12.067616045Z" level=info msg="connecting to shim 73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6" address="unix:///run/containerd/s/c47dc650f1331eaebf4729deab478ca33776494805e0115542431fad151337db" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:12.092705 kubelet[2635]: E1216 12:38:12.092221 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 12:38:12.105081 systemd[1]: Started 
cri-containerd-73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6.scope - libcontainer container 73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6. Dec 16 12:38:12.122519 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:12.158516 containerd[1499]: time="2025-12-16T12:38:12.158426678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9d47ffd8c-g75vj,Uid:12f0abbf-eb53-4a17-a66e-13f45ba59ea7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"73c5ba9b5b94fc11b229b79d4e46195fc208ae3de9f5d8b8d7e4462f5d9befe6\"" Dec 16 12:38:12.161857 containerd[1499]: time="2025-12-16T12:38:12.160494056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:38:12.371550 containerd[1499]: time="2025-12-16T12:38:12.371487746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:12.372622 containerd[1499]: time="2025-12-16T12:38:12.372564137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:38:12.372669 containerd[1499]: time="2025-12-16T12:38:12.372618778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:12.372874 kubelet[2635]: E1216 12:38:12.372813 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:12.372935 kubelet[2635]: E1216 12:38:12.372887 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:12.373046 kubelet[2635]: E1216 12:38:12.373007 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5gtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d47ffd8c-g75vj_calico-apiserver(12f0abbf-eb53-4a17-a66e-13f45ba59ea7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:12.374365 kubelet[2635]: E1216 12:38:12.374304 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7" Dec 16 12:38:12.881878 containerd[1499]: time="2025-12-16T12:38:12.881793690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sdm5,Uid:d7fa23f6-6f8a-4051-b7ef-6f427f114167,Namespace:calico-system,Attempt:0,}" Dec 16 12:38:12.900998 systemd-networkd[1440]: cali0772a8720bb: Gained IPv6LL Dec 16 12:38:13.017884 systemd-networkd[1440]: calif7d4d5a6d36: Link UP Dec 16 12:38:13.019092 systemd-networkd[1440]: calif7d4d5a6d36: Gained carrier Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.926 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8sdm5-eth0 csi-node-driver- calico-system d7fa23f6-6f8a-4051-b7ef-6f427f114167 692 0 2025-12-16 
12:37:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8sdm5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif7d4d5a6d36 [] [] }} ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.926 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.959 [INFO][4781] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" HandleID="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Workload="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.959 [INFO][4781] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" HandleID="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Workload="localhost-k8s-csi--node--driver--8sdm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136fb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8sdm5", "timestamp":"2025-12-16 12:38:12.959654679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.959 [INFO][4781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.959 [INFO][4781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.959 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.971 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.979 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.988 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.991 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.994 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.994 [INFO][4781] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:12.996 [INFO][4781] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:13.002 [INFO][4781] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:13.011 [INFO][4781] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:13.011 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" host="localhost" Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:13.011 [INFO][4781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
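Note that while every image pull fails, Calico IPAM keeps working normally: the "host-wide IPAM lock" entries serialize concurrent CNI ADDs on the node, which holds an affinity for the block 192.168.88.128/26 and hands out consecutive addresses from it (.134 to the apiserver pod earlier, .135 to csi-node-driver-8sdm5 here). The block arithmetic can be checked with Python's standard ipaddress module; the sketch below only verifies membership and capacity, whereas Calico's real allocator additionally tracks per-IP handles, reservations, and releases.

    # Block arithmetic for the Calico IPAM events above: the node owns an
    # affine /26 block and pods receive consecutive host addresses from it.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")      # block from the log
    print(block.num_addresses)                             # 64 IPs per /26

    # Addresses handed out in this section of the log:
    for ip in ("192.168.88.134", "192.168.88.135", "192.168.88.136"):
        addr = ipaddress.ip_address(ip)
        print(addr, "in block:", addr in block)            # True for all three

    # Each pod endpoint is then reported as a /32 inside the block:
    print(ipaddress.ip_network("192.168.88.134/32").subnet_of(block))  # True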
Dec 16 12:38:13.037406 containerd[1499]: 2025-12-16 12:38:13.011 [INFO][4781] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" HandleID="k8s-pod-network.65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Workload="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.013 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8sdm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7fa23f6-6f8a-4051-b7ef-6f427f114167", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8sdm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7d4d5a6d36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.014 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.014 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7d4d5a6d36 ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.018 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.018 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8sdm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7fa23f6-6f8a-4051-b7ef-6f427f114167", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f", Pod:"csi-node-driver-8sdm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif7d4d5a6d36", MAC:"4a:f4:b4:a0:81:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:13.037963 containerd[1499]: 2025-12-16 12:38:13.033 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" Namespace="calico-system" Pod="csi-node-driver-8sdm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--8sdm5-eth0" Dec 16 12:38:13.066158 containerd[1499]: time="2025-12-16T12:38:13.065712705Z" level=info msg="connecting to shim 65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f" address="unix:///run/containerd/s/20131d906ba995a4cd4df7941159838d32598a3d97d8996cc5e2d27571c99f89" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:13.095221 kubelet[2635]: E1216 12:38:13.095178 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7" Dec 16 12:38:13.095759 kubelet[2635]: E1216 12:38:13.095441 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 
12:38:13.098408 systemd[1]: Started cri-containerd-65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f.scope - libcontainer container 65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f. Dec 16 12:38:13.127011 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:13.145780 containerd[1499]: time="2025-12-16T12:38:13.145539266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8sdm5,Uid:d7fa23f6-6f8a-4051-b7ef-6f427f114167,Namespace:calico-system,Attempt:0,} returns sandbox id \"65edb125df235b729083c835ec572d56a5aa53c239087c990ae39c9cc9cd169f\"" Dec 16 12:38:13.148492 containerd[1499]: time="2025-12-16T12:38:13.148450626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:38:13.365412 containerd[1499]: time="2025-12-16T12:38:13.365351447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:13.366781 containerd[1499]: time="2025-12-16T12:38:13.366743445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:38:13.366861 containerd[1499]: time="2025-12-16T12:38:13.366766886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:38:13.367104 kubelet[2635]: E1216 12:38:13.367028 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:38:13.367172 kubelet[2635]: E1216 12:38:13.367105 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:38:13.367270 kubelet[2635]: E1216 12:38:13.367234 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fthcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:13.370015 containerd[1499]: time="2025-12-16T12:38:13.369815890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:38:13.590532 containerd[1499]: time="2025-12-16T12:38:13.590392132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:13.592611 containerd[1499]: time="2025-12-16T12:38:13.592550231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:38:13.592792 containerd[1499]: time="2025-12-16T12:38:13.592664395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:38:13.592941 kubelet[2635]: E1216 12:38:13.592879 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:38:13.593018 kubelet[2635]: E1216 12:38:13.592956 2635 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:38:13.593296 kubelet[2635]: E1216 12:38:13.593213 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fthcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:13.594588 kubelet[2635]: E1216 12:38:13.594458 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167" Dec 16 12:38:13.860948 systemd-networkd[1440]: cali0c32cfa4cf1: Gained IPv6LL Dec 16 12:38:13.882164 containerd[1499]: time="2025-12-16T12:38:13.881224951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4df4cc-zrxc7,Uid:7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7,Namespace:calico-system,Attempt:0,}" Dec 16 12:38:14.023268 systemd-networkd[1440]: cali2484cb96657: Link UP Dec 16 12:38:14.025588 systemd-networkd[1440]: cali2484cb96657: Gained carrier Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.927 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0 calico-kube-controllers-645b4df4cc- calico-system 7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7 789 0 2025-12-16 12:37:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645b4df4cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-645b4df4cc-zrxc7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2484cb96657 [] [] }} ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.927 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.967 [INFO][4862] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" HandleID="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Workload="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.967 [INFO][4862] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" HandleID="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Workload="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012e510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-645b4df4cc-zrxc7", "timestamp":"2025-12-16 12:38:13.967592532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.967 [INFO][4862] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.968 [INFO][4862] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.968 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.982 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.987 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.993 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.995 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.997 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:13.997 [INFO][4862] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.001 [INFO][4862] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695 Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.004 [INFO][4862] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.015 [INFO][4862] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.015 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" host="localhost" Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.016 [INFO][4862] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
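The ErrImagePull / ImagePullBackOff pairs repeating through this section are kubelet's standard retry behavior: each failed pull re-enters a per-image backoff whose delay grows until it hits a ceiling, which is why the same "not found" message resurfaces on each pod sync instead of hammering the registry continuously. The capped doubling is sketched below; the 10 s base and 5 min cap match commonly documented kubelet defaults but are assumptions here, not values taken from this log.

    # Capped exponential backoff of the kind kubelet applies between image
    # pull retries. Base/cap values are assumed defaults, not from this log.
    from itertools import count

    def backoff_delays(base=10, cap=300):
        """Yield retry delays in seconds: base, 2*base, ... clamped at cap."""
        for attempt in count():
            yield min(base * (2 ** attempt), cap)

    gen = backoff_delays()
    print([next(gen) for _ in range(7)])   # [10, 20, 40, 80, 160, 300, 300]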
Dec 16 12:38:14.039753 containerd[1499]: 2025-12-16 12:38:14.016 [INFO][4862] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" HandleID="k8s-pod-network.57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Workload="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.021 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0", GenerateName:"calico-kube-controllers-645b4df4cc-", Namespace:"calico-system", SelfLink:"", UID:"7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645b4df4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-645b4df4cc-zrxc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2484cb96657", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.021 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.021 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2484cb96657 ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.027 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.027 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0", GenerateName:"calico-kube-controllers-645b4df4cc-", Namespace:"calico-system", SelfLink:"", UID:"7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645b4df4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695", Pod:"calico-kube-controllers-645b4df4cc-zrxc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2484cb96657", MAC:"ce:f9:bf:1a:ba:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:38:14.040623 containerd[1499]: 2025-12-16 12:38:14.037 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" Namespace="calico-system" Pod="calico-kube-controllers-645b4df4cc-zrxc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--645b4df4cc--zrxc7-eth0" Dec 16 12:38:14.071956 containerd[1499]: time="2025-12-16T12:38:14.071908213Z" level=info msg="connecting to shim 57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695" address="unix:///run/containerd/s/e3f2bcb8e55f41a90dde39ee22c0cade35e56b5b941cb0232471e6342a484737" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:38:14.100597 kubelet[2635]: E1216 12:38:14.100542 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7" Dec 16 12:38:14.101728 kubelet[2635]: E1216 12:38:14.101653 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167" Dec 16 12:38:14.109086 systemd[1]: Started cri-containerd-57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695.scope - libcontainer container 57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695. Dec 16 12:38:14.140939 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:38:14.162467 containerd[1499]: time="2025-12-16T12:38:14.162402943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4df4cc-zrxc7,Uid:7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7,Namespace:calico-system,Attempt:0,} returns sandbox id \"57277f0a5fa5ece5caa3b2ea79b1a96d3620631d7106261eaae4f423d5957695\"" Dec 16 12:38:14.164425 containerd[1499]: time="2025-12-16T12:38:14.164368196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:38:14.180048 systemd-networkd[1440]: calif7d4d5a6d36: Gained IPv6LL Dec 16 12:38:14.378093 containerd[1499]: time="2025-12-16T12:38:14.378050181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:14.379159 containerd[1499]: time="2025-12-16T12:38:14.379036607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:38:14.379159 containerd[1499]: time="2025-12-16T12:38:14.379104409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:38:14.379739 kubelet[2635]: E1216 12:38:14.379696 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:38:14.379803 kubelet[2635]: E1216 12:38:14.379752 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:38:14.380193 kubelet[2635]: E1216 12:38:14.379934 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55lr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4df4cc-zrxc7_calico-system(7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:14.381159 kubelet[2635]: E1216 12:38:14.381126 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7" Dec 16 12:38:14.881577 containerd[1499]: time="2025-12-16T12:38:14.881128359Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:38:15.092701 containerd[1499]: time="2025-12-16T12:38:15.092636362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:15.093673 containerd[1499]: time="2025-12-16T12:38:15.093625828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:38:15.093726 containerd[1499]: time="2025-12-16T12:38:15.093712231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:38:15.094101 kubelet[2635]: E1216 12:38:15.093882 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:38:15.094101 kubelet[2635]: E1216 12:38:15.093938 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:38:15.094101 kubelet[2635]: E1216 12:38:15.094047 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:907bd910ce864eb68ce15c1c07b8d8bb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:15.096378 
containerd[1499]: time="2025-12-16T12:38:15.096336900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:38:15.102604 kubelet[2635]: E1216 12:38:15.102553 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7" Dec 16 12:38:15.103097 kubelet[2635]: E1216 12:38:15.102923 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167" Dec 16 12:38:15.311599 containerd[1499]: time="2025-12-16T12:38:15.311368700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:15.312951 containerd[1499]: time="2025-12-16T12:38:15.312873140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:38:15.313021 containerd[1499]: time="2025-12-16T12:38:15.312946382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:38:15.313212 kubelet[2635]: E1216 12:38:15.313172 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:38:15.313687 kubelet[2635]: E1216 12:38:15.313656 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:38:15.314034 kubelet[2635]: 
E1216 12:38:15.313926 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:15.315225 kubelet[2635]: E1216 12:38:15.315176 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74989f6bd6-bfk22" podUID="3f992751-c3f5-48d0-b7ab-cb9e9ad078f7" Dec 16 12:38:15.716150 systemd-networkd[1440]: cali2484cb96657: Gained IPv6LL Dec 16 12:38:16.105154 kubelet[2635]: E1216 12:38:16.105109 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7" Dec 16 12:38:16.330761 systemd[1]: Started sshd@8-10.0.0.86:22-10.0.0.1:47556.service - OpenSSH per-connection server daemon (10.0.0.1:47556). Dec 16 12:38:16.403137 sshd[4934]: Accepted publickey for core from 10.0.0.1 port 47556 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:16.404617 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:16.410928 systemd-logind[1482]: New session 9 of user core. Dec 16 12:38:16.417035 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:38:16.576784 sshd[4937]: Connection closed by 10.0.0.1 port 47556 Dec 16 12:38:16.577325 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:16.581555 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:38:16.581788 systemd[1]: sshd@8-10.0.0.86:22-10.0.0.1:47556.service: Deactivated successfully. Dec 16 12:38:16.583735 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:38:16.588169 systemd-logind[1482]: Removed session 9. Dec 16 12:38:21.591724 systemd[1]: Started sshd@9-10.0.0.86:22-10.0.0.1:55294.service - OpenSSH per-connection server daemon (10.0.0.1:55294). Dec 16 12:38:21.661240 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 55294 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:21.663743 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:21.671495 systemd-logind[1482]: New session 10 of user core. Dec 16 12:38:21.679195 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:38:21.805020 sshd[4968]: Connection closed by 10.0.0.1 port 55294 Dec 16 12:38:21.805364 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:21.818594 systemd[1]: sshd@9-10.0.0.86:22-10.0.0.1:55294.service: Deactivated successfully. Dec 16 12:38:21.821556 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:38:21.822597 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:38:21.825784 systemd[1]: Started sshd@10-10.0.0.86:22-10.0.0.1:55302.service - OpenSSH per-connection server daemon (10.0.0.1:55302). Dec 16 12:38:21.826420 systemd-logind[1482]: Removed session 10. Dec 16 12:38:21.892273 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 55302 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:21.893721 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:21.898188 systemd-logind[1482]: New session 11 of user core. Dec 16 12:38:21.909079 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:38:22.109797 sshd[4985]: Connection closed by 10.0.0.1 port 55302 Dec 16 12:38:22.111312 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:22.122587 systemd[1]: sshd@10-10.0.0.86:22-10.0.0.1:55302.service: Deactivated successfully. Dec 16 12:38:22.129951 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:38:22.131482 systemd-logind[1482]: Session 11 logged out. 
Waiting for processes to exit. Dec 16 12:38:22.135393 systemd[1]: Started sshd@11-10.0.0.86:22-10.0.0.1:55308.service - OpenSSH per-connection server daemon (10.0.0.1:55308). Dec 16 12:38:22.141142 systemd-logind[1482]: Removed session 11. Dec 16 12:38:22.208842 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 55308 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:22.210331 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:22.215531 systemd-logind[1482]: New session 12 of user core. Dec 16 12:38:22.224014 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:38:22.364700 sshd[5000]: Connection closed by 10.0.0.1 port 55308 Dec 16 12:38:22.365059 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:22.368518 systemd[1]: sshd@11-10.0.0.86:22-10.0.0.1:55308.service: Deactivated successfully. Dec 16 12:38:22.371525 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:38:22.372335 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:38:22.373528 systemd-logind[1482]: Removed session 12. Dec 16 12:38:23.887280 containerd[1499]: time="2025-12-16T12:38:23.887143535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:38:24.112130 containerd[1499]: time="2025-12-16T12:38:24.112084329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:24.113864 containerd[1499]: time="2025-12-16T12:38:24.113730327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:38:24.113864 containerd[1499]: time="2025-12-16T12:38:24.113764328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:24.113993 kubelet[2635]: E1216 12:38:24.113922 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:38:24.113993 kubelet[2635]: E1216 12:38:24.113970 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:38:24.114294 kubelet[2635]: E1216 12:38:24.114083 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jxxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2h7z8_calico-system(9321e5e2-d99b-4289-9374-6508cb7284e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:24.115360 kubelet[2635]: E1216 12:38:24.115246 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 12:38:24.882177 containerd[1499]: 
time="2025-12-16T12:38:24.882098697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:38:25.094340 containerd[1499]: time="2025-12-16T12:38:25.094284659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:25.107769 containerd[1499]: time="2025-12-16T12:38:25.107687730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:38:25.107907 containerd[1499]: time="2025-12-16T12:38:25.107790052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:25.108030 kubelet[2635]: E1216 12:38:25.107974 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:25.108030 kubelet[2635]: E1216 12:38:25.108026 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:25.108205 kubelet[2635]: E1216 12:38:25.108151 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fzm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d47ffd8c-qf5gh_calico-apiserver(58c1d842-0739-4b32-93b5-118e08e0afe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:25.109718 kubelet[2635]: E1216 12:38:25.109649 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:38:25.882398 containerd[1499]: time="2025-12-16T12:38:25.882307735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:38:25.882764 kubelet[2635]: E1216 12:38:25.882696 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74989f6bd6-bfk22" podUID="3f992751-c3f5-48d0-b7ab-cb9e9ad078f7" Dec 16 12:38:26.100407 containerd[1499]: time="2025-12-16T12:38:26.100286419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:26.101478 containerd[1499]: time="2025-12-16T12:38:26.101430125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 
12:38:26.101586 containerd[1499]: time="2025-12-16T12:38:26.101522208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:38:26.101692 kubelet[2635]: E1216 12:38:26.101642 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:26.101752 kubelet[2635]: E1216 12:38:26.101693 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:38:26.101888 kubelet[2635]: E1216 12:38:26.101838 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5gtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9d47ffd8c-g75vj_calico-apiserver(12f0abbf-eb53-4a17-a66e-13f45ba59ea7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:26.103073 
kubelet[2635]: E1216 12:38:26.102969 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7" Dec 16 12:38:26.882007 containerd[1499]: time="2025-12-16T12:38:26.881914807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:38:27.095305 containerd[1499]: time="2025-12-16T12:38:27.095231295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:27.098100 containerd[1499]: time="2025-12-16T12:38:27.098036359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:38:27.098100 containerd[1499]: time="2025-12-16T12:38:27.098079880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:38:27.098357 kubelet[2635]: E1216 12:38:27.098237 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:38:27.098357 kubelet[2635]: E1216 12:38:27.098283 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:38:27.098629 kubelet[2635]: E1216 12:38:27.098391 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fthcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:27.102902 containerd[1499]: time="2025-12-16T12:38:27.101476317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:38:27.287554 containerd[1499]: time="2025-12-16T12:38:27.287412756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:27.288834 containerd[1499]: time="2025-12-16T12:38:27.288741866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:38:27.288834 containerd[1499]: time="2025-12-16T12:38:27.288836508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:38:27.289198 kubelet[2635]: E1216 12:38:27.289159 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:38:27.289449 kubelet[2635]: E1216 12:38:27.289301 2635 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:38:27.289540 kubelet[2635]: E1216 12:38:27.289426 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fthcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8sdm5_calico-system(d7fa23f6-6f8a-4051-b7ef-6f427f114167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:27.290868 kubelet[2635]: E1216 12:38:27.290778 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167" Dec 16 12:38:27.377770 systemd[1]: Started sshd@12-10.0.0.86:22-10.0.0.1:55314.service - OpenSSH per-connection server daemon (10.0.0.1:55314). Dec 16 12:38:27.444784 sshd[5018]: Accepted publickey for core from 10.0.0.1 port 55314 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:27.446183 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:27.451101 systemd-logind[1482]: New session 13 of user core. Dec 16 12:38:27.461025 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:38:27.608962 sshd[5021]: Connection closed by 10.0.0.1 port 55314 Dec 16 12:38:27.609501 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:27.617269 systemd[1]: sshd@12-10.0.0.86:22-10.0.0.1:55314.service: Deactivated successfully. Dec 16 12:38:27.619211 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:38:27.621912 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:38:27.624224 systemd[1]: Started sshd@13-10.0.0.86:22-10.0.0.1:55318.service - OpenSSH per-connection server daemon (10.0.0.1:55318). Dec 16 12:38:27.625479 systemd-logind[1482]: Removed session 13. Dec 16 12:38:27.696244 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 55318 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:27.697642 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:27.702018 systemd-logind[1482]: New session 14 of user core. Dec 16 12:38:27.716000 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:38:27.938342 sshd[5038]: Connection closed by 10.0.0.1 port 55318 Dec 16 12:38:27.938712 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:27.951290 systemd[1]: sshd@13-10.0.0.86:22-10.0.0.1:55318.service: Deactivated successfully. Dec 16 12:38:27.954524 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:38:27.955465 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:38:27.958142 systemd[1]: Started sshd@14-10.0.0.86:22-10.0.0.1:55332.service - OpenSSH per-connection server daemon (10.0.0.1:55332). Dec 16 12:38:27.958955 systemd-logind[1482]: Removed session 14. Dec 16 12:38:28.023360 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 55332 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:28.024934 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:28.029922 systemd-logind[1482]: New session 15 of user core. Dec 16 12:38:28.040047 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:38:28.618930 sshd[5052]: Connection closed by 10.0.0.1 port 55332 Dec 16 12:38:28.619281 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:28.627630 systemd[1]: sshd@14-10.0.0.86:22-10.0.0.1:55332.service: Deactivated successfully. Dec 16 12:38:28.633174 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:38:28.638370 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:38:28.642236 systemd[1]: Started sshd@15-10.0.0.86:22-10.0.0.1:55334.service - OpenSSH per-connection server daemon (10.0.0.1:55334). Dec 16 12:38:28.644284 systemd-logind[1482]: Removed session 15. 
Dec 16 12:38:28.716874 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 55334 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:28.720472 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:28.725904 systemd-logind[1482]: New session 16 of user core. Dec 16 12:38:28.736057 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:38:29.016354 sshd[5073]: Connection closed by 10.0.0.1 port 55334 Dec 16 12:38:29.017024 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:29.030514 systemd[1]: sshd@15-10.0.0.86:22-10.0.0.1:55334.service: Deactivated successfully. Dec 16 12:38:29.034669 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:38:29.037506 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:38:29.041383 systemd[1]: Started sshd@16-10.0.0.86:22-10.0.0.1:55350.service - OpenSSH per-connection server daemon (10.0.0.1:55350). Dec 16 12:38:29.043067 systemd-logind[1482]: Removed session 16. Dec 16 12:38:29.103893 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 55350 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:29.105261 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:29.109855 systemd-logind[1482]: New session 17 of user core. Dec 16 12:38:29.121053 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:38:29.268836 sshd[5093]: Connection closed by 10.0.0.1 port 55350 Dec 16 12:38:29.268653 sshd-session[5090]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:29.272932 systemd[1]: sshd@16-10.0.0.86:22-10.0.0.1:55350.service: Deactivated successfully. Dec 16 12:38:29.275602 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:38:29.276777 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:38:29.278089 systemd-logind[1482]: Removed session 17. 
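
The retry cadence in these entries is kubelet's image back-off at work: kube-controllers failed at 12:38:14 and is re-pulled at 12:38:30 below, whisker failed at 12:38:15 and retries at 12:38:39. A sketch of that schedule, assuming the upstream kubelet defaults of a 10-second base doubling per failure up to a 300-second cap (the constants are assumptions about this build, not read from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s base, doubling, capped at 300s.
	const base, maxDelay = 10 * time.Second, 300 * time.Second
	delay := base
	for failure := 1; failure <= 8; failure++ {
		fmt.Printf("after failure %d: wait %v before the next pull\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the delay is in force, pod_workers reports ImagePullBackOff ("Back-off pulling image ...") instead of ErrImagePull, which is exactly the alternation visible in this journal.
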
Dec 16 12:38:30.882149 containerd[1499]: time="2025-12-16T12:38:30.881577615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:38:31.095913 containerd[1499]: time="2025-12-16T12:38:31.095835058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:38:31.098812 containerd[1499]: time="2025-12-16T12:38:31.098749577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:38:31.098920 containerd[1499]: time="2025-12-16T12:38:31.098806217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:38:31.100169 kubelet[2635]: E1216 12:38:31.100110 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:38:31.100595 kubelet[2635]: E1216 12:38:31.100168 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:38:31.100595 kubelet[2635]: E1216 12:38:31.100306 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55lr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4df4cc-zrxc7_calico-system(7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:38:31.101888 kubelet[2635]: E1216 12:38:31.101801 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7" Dec 16 12:38:34.281348 systemd[1]: Started sshd@17-10.0.0.86:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). Dec 16 12:38:34.357985 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:34.359579 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:34.363960 systemd-logind[1482]: New session 18 of user core. Dec 16 12:38:34.377050 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:38:34.528753 sshd[5117]: Connection closed by 10.0.0.1 port 60218 Dec 16 12:38:34.529101 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:34.532731 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:38:34.533174 systemd[1]: sshd@17-10.0.0.86:22-10.0.0.1:60218.service: Deactivated successfully. Dec 16 12:38:34.535073 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:38:34.537493 systemd-logind[1482]: Removed session 18. 
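
Seven distinct images are failing across these entries (kube-controllers, whisker, whisker-backend, goldmane, apiserver, csi, and node-driver-registrar), so deduplicating the journal mechanically is useful. A small sketch that extracts the distinct failing references from an excerpt like this one; the regexp targets the escaped \"...\" reference strings in the error messages:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Journal excerpt; in practice, feed in journalctl output instead.
	log := `failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": not found
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": not found
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": not found`

	// References appear as escaped \"...\" strings inside the error text.
	re := regexp.MustCompile(`failed to resolve reference \\?"([^"\\]+)\\?"`)
	seen := map[string]bool{}
	for _, m := range re.FindAllStringSubmatch(log, -1) {
		if !seen[m[1]] {
			seen[m[1]] = true
			fmt.Println(m[1]) // each failing image:tag, printed once
		}
	}
}
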
Dec 16 12:38:35.895568 kubelet[2635]: E1216 12:38:35.895093 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2h7z8" podUID="9321e5e2-d99b-4289-9374-6508cb7284e1" Dec 16 12:38:37.882379 kubelet[2635]: E1216 12:38:37.882321 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-qf5gh" podUID="58c1d842-0739-4b32-93b5-118e08e0afe6" Dec 16 12:38:39.541398 systemd[1]: Started sshd@18-10.0.0.86:22-10.0.0.1:60220.service - OpenSSH per-connection server daemon (10.0.0.1:60220). Dec 16 12:38:39.612122 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 60220 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE Dec 16 12:38:39.614771 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:38:39.621761 systemd-logind[1482]: New session 19 of user core. Dec 16 12:38:39.628048 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:38:39.772062 sshd[5159]: Connection closed by 10.0.0.1 port 60220 Dec 16 12:38:39.772373 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Dec 16 12:38:39.775692 systemd[1]: sshd@18-10.0.0.86:22-10.0.0.1:60220.service: Deactivated successfully. Dec 16 12:38:39.777383 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:38:39.779309 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:38:39.780647 systemd-logind[1482]: Removed session 19. 
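
The "Error syncing pod, skipping" lines are also recorded as Events on the affected pods, so the node journal is not the only view of this history. A sketch using client-go to list warning events in the calico-system namespace; the kubeconfig path is hypothetical, and type=Warning is the standard event field selector:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("calico-system").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "type=Warning"})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		// Reasons here would include Failed and BackOff for the pods above.
		fmt.Printf("%s %s/%s: %s\n", e.Reason, e.Namespace, e.InvolvedObject.Name, e.Message)
	}
}
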
Dec 16 12:38:39.881985 kubelet[2635]: E1216 12:38:39.881870 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9d47ffd8c-g75vj" podUID="12f0abbf-eb53-4a17-a66e-13f45ba59ea7"
Dec 16 12:38:39.882761 containerd[1499]: time="2025-12-16T12:38:39.882506062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:38:40.094428 containerd[1499]: time="2025-12-16T12:38:40.094380643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:38:40.095444 containerd[1499]: time="2025-12-16T12:38:40.095408727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:38:40.095536 containerd[1499]: time="2025-12-16T12:38:40.095483808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:38:40.095654 kubelet[2635]: E1216 12:38:40.095618 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:38:40.095694 kubelet[2635]: E1216 12:38:40.095666 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:38:40.095833 kubelet[2635]: E1216 12:38:40.095776 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:907bd910ce864eb68ce15c1c07b8d8bb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:38:40.097703 containerd[1499]: time="2025-12-16T12:38:40.097679537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:38:40.305685 containerd[1499]: time="2025-12-16T12:38:40.305065110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:38:40.306925 containerd[1499]: time="2025-12-16T12:38:40.306774357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:38:40.306925 containerd[1499]: time="2025-12-16T12:38:40.306895357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:38:40.307332 kubelet[2635]: E1216 12:38:40.307201 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:38:40.307332 kubelet[2635]: E1216 12:38:40.307251 2635 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:38:40.307455 kubelet[2635]: E1216 12:38:40.307377 2635 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm7rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74989f6bd6-bfk22_calico-system(3f992751-c3f5-48d0-b7ab-cb9e9ad078f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:38:40.309684 kubelet[2635]: E1216 12:38:40.309638 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74989f6bd6-bfk22" podUID="3f992751-c3f5-48d0-b7ab-cb9e9ad078f7"
Dec 16 12:38:41.884292 kubelet[2635]: E1216 12:38:41.883474 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8sdm5" podUID="d7fa23f6-6f8a-4051-b7ef-6f427f114167"
Dec 16 12:38:42.883204 kubelet[2635]: E1216 12:38:42.882381 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4df4cc-zrxc7" podUID="7daf5ab5-6622-4a5d-ba28-10d8ac9c5df7"
Dec 16 12:38:44.799704 systemd[1]: Started sshd@19-10.0.0.86:22-10.0.0.1:58224.service - OpenSSH per-connection server daemon (10.0.0.1:58224).
Dec 16 12:38:44.856756 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 58224 ssh2: RSA SHA256:J/XE0kfUILM6R4vAQ/VFNBUvzOeHWyvHhn8QzqONTrE
Dec 16 12:38:44.858248 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:38:44.862565 systemd-logind[1482]: New session 20 of user core.
Dec 16 12:38:44.874026 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 12:38:45.024862 sshd[5176]: Connection closed by 10.0.0.1 port 58224
Dec 16 12:38:45.024979 sshd-session[5173]: pam_unix(sshd:session): session closed for user core
Dec 16 12:38:45.028716 systemd[1]: sshd@19-10.0.0.86:22-10.0.0.1:58224.service: Deactivated successfully.
Dec 16 12:38:45.031612 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 12:38:45.033461 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit.
Dec 16 12:38:45.034986 systemd-logind[1482]: Removed session 20.