Oct 29 23:31:35.758129 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 29 23:31:35.758150 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Oct 29 22:07:18 -00 2025 Oct 29 23:31:35.758159 kernel: KASLR enabled Oct 29 23:31:35.758165 kernel: efi: EFI v2.7 by EDK II Oct 29 23:31:35.758170 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 29 23:31:35.758175 kernel: random: crng init done Oct 29 23:31:35.758182 kernel: secureboot: Secure boot disabled Oct 29 23:31:35.758188 kernel: ACPI: Early table checksum verification disabled Oct 29 23:31:35.758194 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 29 23:31:35.758200 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 29 23:31:35.758206 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758212 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758218 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758224 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758231 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758251 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758258 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758264 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758270 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 29 23:31:35.758276 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 29 23:31:35.758282 kernel: ACPI: Use ACPI SPCR as default console: No Oct 29 23:31:35.758288 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:31:35.758294 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 29 23:31:35.758303 kernel: Zone ranges: Oct 29 23:31:35.758311 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:31:35.758321 kernel: DMA32 empty Oct 29 23:31:35.758327 kernel: Normal empty Oct 29 23:31:35.758333 kernel: Device empty Oct 29 23:31:35.758339 kernel: Movable zone start for each node Oct 29 23:31:35.758345 kernel: Early memory node ranges Oct 29 23:31:35.758352 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 29 23:31:35.758358 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 29 23:31:35.758364 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 29 23:31:35.758369 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 29 23:31:35.758376 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 29 23:31:35.758382 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 29 23:31:35.758388 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 29 23:31:35.758395 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 29 23:31:35.758401 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 29 23:31:35.758407 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 29 23:31:35.758415 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 29 23:31:35.758422 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 29 23:31:35.758428 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 29 23:31:35.758436 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 29 23:31:35.758443 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 29 23:31:35.758449 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 29 23:31:35.758455 kernel: psci: probing for conduit method from ACPI. Oct 29 23:31:35.758462 kernel: psci: PSCIv1.1 detected in firmware. Oct 29 23:31:35.758468 kernel: psci: Using standard PSCI v0.2 function IDs Oct 29 23:31:35.758474 kernel: psci: Trusted OS migration not required Oct 29 23:31:35.758481 kernel: psci: SMC Calling Convention v1.1 Oct 29 23:31:35.758488 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 29 23:31:35.758494 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 29 23:31:35.758502 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 29 23:31:35.758509 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 29 23:31:35.758515 kernel: Detected PIPT I-cache on CPU0 Oct 29 23:31:35.758522 kernel: CPU features: detected: GIC system register CPU interface Oct 29 23:31:35.758528 kernel: CPU features: detected: Spectre-v4 Oct 29 23:31:35.758535 kernel: CPU features: detected: Spectre-BHB Oct 29 23:31:35.758541 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 29 23:31:35.758548 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 29 23:31:35.758554 kernel: CPU features: detected: ARM erratum 1418040 Oct 29 23:31:35.758561 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 29 23:31:35.758567 kernel: alternatives: applying boot alternatives Oct 29 23:31:35.758575 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1714a6d4d6c76fbe0af2166549be0df85ee0260f299bb3baeaf286f50f12863 Oct 29 23:31:35.758583 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 29 23:31:35.758590 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 29 23:31:35.758596 kernel: Fallback order for Node 0: 0 Oct 29 23:31:35.758603 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 29 23:31:35.758609 kernel: Policy zone: DMA Oct 29 23:31:35.758615 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 29 23:31:35.758622 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 29 23:31:35.758628 kernel: software IO TLB: area num 4. Oct 29 23:31:35.758635 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 29 23:31:35.758642 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 29 23:31:35.758648 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 29 23:31:35.758656 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 29 23:31:35.758664 kernel: rcu: RCU event tracing is enabled. Oct 29 23:31:35.758670 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 29 23:31:35.758677 kernel: Trampoline variant of Tasks RCU enabled. Oct 29 23:31:35.758683 kernel: Tracing variant of Tasks RCU enabled. 
Oct 29 23:31:35.758690 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 29 23:31:35.758696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 29 23:31:35.758703 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:31:35.758709 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 29 23:31:35.758731 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 29 23:31:35.758738 kernel: GICv3: 256 SPIs implemented Oct 29 23:31:35.758746 kernel: GICv3: 0 Extended SPIs implemented Oct 29 23:31:35.758753 kernel: Root IRQ handler: gic_handle_irq Oct 29 23:31:35.758760 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 29 23:31:35.758766 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 29 23:31:35.758772 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 29 23:31:35.758778 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 29 23:31:35.758785 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 29 23:31:35.758792 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 29 23:31:35.758798 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 29 23:31:35.758805 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 29 23:31:35.758811 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 29 23:31:35.758817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:31:35.758825 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 29 23:31:35.758832 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 29 23:31:35.758839 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 29 23:31:35.758845 kernel: arm-pv: using stolen time PV Oct 29 23:31:35.758852 kernel: Console: colour dummy device 80x25 Oct 29 23:31:35.758859 kernel: ACPI: Core revision 20240827 Oct 29 23:31:35.758866 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 29 23:31:35.758872 kernel: pid_max: default: 32768 minimum: 301 Oct 29 23:31:35.758879 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 29 23:31:35.758886 kernel: landlock: Up and running. Oct 29 23:31:35.758894 kernel: SELinux: Initializing. Oct 29 23:31:35.758900 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:31:35.758907 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 29 23:31:35.758913 kernel: rcu: Hierarchical SRCU implementation. Oct 29 23:31:35.758920 kernel: rcu: Max phase no-delay instances is 400. Oct 29 23:31:35.758927 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 29 23:31:35.758933 kernel: Remapping and enabling EFI services. Oct 29 23:31:35.758940 kernel: smp: Bringing up secondary CPUs ... 
Oct 29 23:31:35.758947 kernel: Detected PIPT I-cache on CPU1 Oct 29 23:31:35.758959 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 29 23:31:35.758966 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 29 23:31:35.758973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:31:35.758981 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 29 23:31:35.758988 kernel: Detected PIPT I-cache on CPU2 Oct 29 23:31:35.758995 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 29 23:31:35.759002 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 29 23:31:35.759009 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:31:35.759017 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 29 23:31:35.759024 kernel: Detected PIPT I-cache on CPU3 Oct 29 23:31:35.759031 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 29 23:31:35.759038 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 29 23:31:35.759050 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 29 23:31:35.759058 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 29 23:31:35.759066 kernel: smp: Brought up 1 node, 4 CPUs Oct 29 23:31:35.759072 kernel: SMP: Total of 4 processors activated. Oct 29 23:31:35.759079 kernel: CPU: All CPU(s) started at EL1 Oct 29 23:31:35.759088 kernel: CPU features: detected: 32-bit EL0 Support Oct 29 23:31:35.759095 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 29 23:31:35.759102 kernel: CPU features: detected: Common not Private translations Oct 29 23:31:35.759109 kernel: CPU features: detected: CRC32 instructions Oct 29 23:31:35.759117 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 29 23:31:35.759124 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 29 23:31:35.759131 kernel: CPU features: detected: LSE atomic instructions Oct 29 23:31:35.759138 kernel: CPU features: detected: Privileged Access Never Oct 29 23:31:35.759144 kernel: CPU features: detected: RAS Extension Support Oct 29 23:31:35.759153 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 29 23:31:35.759160 kernel: alternatives: applying system-wide alternatives Oct 29 23:31:35.759167 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 29 23:31:35.759174 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved) Oct 29 23:31:35.759181 kernel: devtmpfs: initialized Oct 29 23:31:35.759188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 29 23:31:35.759195 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 29 23:31:35.759202 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 29 23:31:35.759209 kernel: 0 pages in range for non-PLT usage Oct 29 23:31:35.759217 kernel: 508560 pages in range for PLT usage Oct 29 23:31:35.759224 kernel: pinctrl core: initialized pinctrl subsystem Oct 29 23:31:35.759231 kernel: SMBIOS 3.0.0 present. 
Oct 29 23:31:35.759243 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 29 23:31:35.759250 kernel: DMI: Memory slots populated: 1/1 Oct 29 23:31:35.759257 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 29 23:31:35.759264 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 29 23:31:35.759271 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 29 23:31:35.759278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 29 23:31:35.759286 kernel: audit: initializing netlink subsys (disabled) Oct 29 23:31:35.759293 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Oct 29 23:31:35.759300 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 29 23:31:35.759307 kernel: cpuidle: using governor menu Oct 29 23:31:35.759314 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 29 23:31:35.759321 kernel: ASID allocator initialised with 32768 entries Oct 29 23:31:35.759328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 29 23:31:35.759335 kernel: Serial: AMBA PL011 UART driver Oct 29 23:31:35.759342 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 29 23:31:35.759350 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 29 23:31:35.759357 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 29 23:31:35.759364 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 29 23:31:35.759371 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 29 23:31:35.759378 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 29 23:31:35.759385 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 29 23:31:35.759392 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 29 23:31:35.759398 kernel: ACPI: Added _OSI(Module Device) Oct 29 23:31:35.759405 kernel: ACPI: Added _OSI(Processor Device) Oct 29 23:31:35.759413 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 29 23:31:35.759420 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 29 23:31:35.759427 kernel: ACPI: Interpreter enabled Oct 29 23:31:35.759434 kernel: ACPI: Using GIC for interrupt routing Oct 29 23:31:35.759441 kernel: ACPI: MCFG table detected, 1 entries Oct 29 23:31:35.759448 kernel: ACPI: CPU0 has been hot-added Oct 29 23:31:35.759455 kernel: ACPI: CPU1 has been hot-added Oct 29 23:31:35.759461 kernel: ACPI: CPU2 has been hot-added Oct 29 23:31:35.759468 kernel: ACPI: CPU3 has been hot-added Oct 29 23:31:35.759475 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 29 23:31:35.759483 kernel: printk: legacy console [ttyAMA0] enabled Oct 29 23:31:35.759490 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 29 23:31:35.759623 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 29 23:31:35.759691 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 29 23:31:35.759753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 29 23:31:35.759810 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 29 23:31:35.759866 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 29 23:31:35.759877 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 29 23:31:35.759884 
kernel: PCI host bridge to bus 0000:00 Oct 29 23:31:35.759946 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 29 23:31:35.760000 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 29 23:31:35.760059 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 29 23:31:35.760111 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 29 23:31:35.760205 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 29 23:31:35.760290 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 29 23:31:35.760351 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 29 23:31:35.760410 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 29 23:31:35.760468 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 29 23:31:35.760525 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 29 23:31:35.760583 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 29 23:31:35.760643 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 29 23:31:35.760698 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 29 23:31:35.760749 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 29 23:31:35.760801 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 29 23:31:35.760810 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 29 23:31:35.760817 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 29 23:31:35.760824 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 29 23:31:35.760831 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 29 23:31:35.760840 kernel: iommu: Default domain type: Translated Oct 29 23:31:35.760847 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 29 23:31:35.760854 kernel: efivars: Registered efivars operations Oct 29 23:31:35.760861 kernel: vgaarb: loaded Oct 29 23:31:35.760868 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 29 23:31:35.760875 kernel: VFS: Disk quotas dquot_6.6.0 Oct 29 23:31:35.760882 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 29 23:31:35.760889 kernel: pnp: PnP ACPI init Oct 29 23:31:35.760952 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 29 23:31:35.760963 kernel: pnp: PnP ACPI: found 1 devices Oct 29 23:31:35.760970 kernel: NET: Registered PF_INET protocol family Oct 29 23:31:35.760977 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 29 23:31:35.760985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 29 23:31:35.760992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 29 23:31:35.760999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 29 23:31:35.761006 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 29 23:31:35.761013 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 29 23:31:35.761022 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:31:35.761029 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 29 23:31:35.761036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 29 23:31:35.761043 kernel: PCI: CLS 0 bytes, default 64 Oct 29 23:31:35.761059 
kernel: kvm [1]: HYP mode not available Oct 29 23:31:35.761067 kernel: Initialise system trusted keyrings Oct 29 23:31:35.761074 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 29 23:31:35.761081 kernel: Key type asymmetric registered Oct 29 23:31:35.761088 kernel: Asymmetric key parser 'x509' registered Oct 29 23:31:35.761096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 23:31:35.761103 kernel: io scheduler mq-deadline registered Oct 29 23:31:35.761110 kernel: io scheduler kyber registered Oct 29 23:31:35.761117 kernel: io scheduler bfq registered Oct 29 23:31:35.761124 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 29 23:31:35.761131 kernel: ACPI: button: Power Button [PWRB] Oct 29 23:31:35.761138 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 29 23:31:35.761204 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 29 23:31:35.761213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 23:31:35.761222 kernel: thunder_xcv, ver 1.0 Oct 29 23:31:35.761229 kernel: thunder_bgx, ver 1.0 Oct 29 23:31:35.761287 kernel: nicpf, ver 1.0 Oct 29 23:31:35.761296 kernel: nicvf, ver 1.0 Oct 29 23:31:35.761372 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 29 23:31:35.761429 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T23:31:35 UTC (1761780695) Oct 29 23:31:35.761439 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 29 23:31:35.761446 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 29 23:31:35.761455 kernel: watchdog: NMI not fully supported Oct 29 23:31:35.761462 kernel: watchdog: Hard watchdog permanently disabled Oct 29 23:31:35.761469 kernel: NET: Registered PF_INET6 protocol family Oct 29 23:31:35.761476 kernel: Segment Routing with IPv6 Oct 29 23:31:35.761483 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 23:31:35.761490 kernel: NET: Registered PF_PACKET protocol family Oct 29 23:31:35.761497 kernel: Key type dns_resolver registered Oct 29 23:31:35.761504 kernel: registered taskstats version 1 Oct 29 23:31:35.761511 kernel: Loading compiled-in X.509 certificates Oct 29 23:31:35.761518 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 7e3febc5e0a8b643b4690bc3ed5e79b236e1ccf8' Oct 29 23:31:35.761526 kernel: Demotion targets for Node 0: null Oct 29 23:31:35.761533 kernel: Key type .fscrypt registered Oct 29 23:31:35.761540 kernel: Key type fscrypt-provisioning registered Oct 29 23:31:35.761546 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 29 23:31:35.761553 kernel: ima: Allocated hash algorithm: sha1 Oct 29 23:31:35.761560 kernel: ima: No architecture policies found Oct 29 23:31:35.761567 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 29 23:31:35.761574 kernel: clk: Disabling unused clocks Oct 29 23:31:35.761581 kernel: PM: genpd: Disabling unused power domains Oct 29 23:31:35.761589 kernel: Warning: unable to open an initial console. Oct 29 23:31:35.761596 kernel: Freeing unused kernel memory: 38976K Oct 29 23:31:35.761603 kernel: Run /init as init process Oct 29 23:31:35.761610 kernel: with arguments: Oct 29 23:31:35.761617 kernel: /init Oct 29 23:31:35.761623 kernel: with environment: Oct 29 23:31:35.761630 kernel: HOME=/ Oct 29 23:31:35.761637 kernel: TERM=linux Oct 29 23:31:35.761645 systemd[1]: Successfully made /usr/ read-only. 
Oct 29 23:31:35.761656 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:31:35.761664 systemd[1]: Detected virtualization kvm. Oct 29 23:31:35.761672 systemd[1]: Detected architecture arm64. Oct 29 23:31:35.761679 systemd[1]: Running in initrd. Oct 29 23:31:35.761686 systemd[1]: No hostname configured, using default hostname. Oct 29 23:31:35.761694 systemd[1]: Hostname set to . Oct 29 23:31:35.761701 systemd[1]: Initializing machine ID from VM UUID. Oct 29 23:31:35.761710 systemd[1]: Queued start job for default target initrd.target. Oct 29 23:31:35.761717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:31:35.761725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:31:35.761733 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 29 23:31:35.761740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:31:35.761748 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 29 23:31:35.761756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 29 23:31:35.761766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 29 23:31:35.761773 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 29 23:31:35.761781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:31:35.761789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:31:35.761796 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:31:35.761804 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:31:35.761811 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:31:35.761818 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:31:35.761827 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:31:35.761835 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:31:35.761842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 29 23:31:35.761850 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 29 23:31:35.761857 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:31:35.761865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:31:35.761872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:31:35.761880 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:31:35.761887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 29 23:31:35.761896 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:31:35.761903 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Oct 29 23:31:35.761911 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 29 23:31:35.761919 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 23:31:35.761926 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:31:35.761934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:31:35.761941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:31:35.761948 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 29 23:31:35.761958 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:31:35.761965 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 23:31:35.761986 systemd-journald[243]: Collecting audit messages is disabled. Oct 29 23:31:35.762006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 29 23:31:35.762015 systemd-journald[243]: Journal started Oct 29 23:31:35.762032 systemd-journald[243]: Runtime Journal (/run/log/journal/c51f10cc45464405bd6b462cfd574ab1) is 6M, max 48.5M, 42.4M free. Oct 29 23:31:35.771321 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 29 23:31:35.771344 kernel: Bridge firewalling registered Oct 29 23:31:35.754324 systemd-modules-load[245]: Inserted module 'overlay' Oct 29 23:31:35.770470 systemd-modules-load[245]: Inserted module 'br_netfilter' Oct 29 23:31:35.775968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:31:35.775987 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:31:35.777324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:31:35.778572 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 29 23:31:35.782897 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 29 23:31:35.784710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:31:35.786668 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:31:35.798674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:31:35.804851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:31:35.808300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:31:35.810897 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 29 23:31:35.813570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:31:35.816376 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:31:35.818828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:31:35.820886 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 29 23:31:35.851251 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1714a6d4d6c76fbe0af2166549be0df85ee0260f299bb3baeaf286f50f12863 Oct 29 23:31:35.865543 systemd-resolved[288]: Positive Trust Anchors: Oct 29 23:31:35.865561 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:31:35.865591 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:31:35.870226 systemd-resolved[288]: Defaulting to hostname 'linux'. Oct 29 23:31:35.871192 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:31:35.875294 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:31:35.921290 kernel: SCSI subsystem initialized Oct 29 23:31:35.926281 kernel: Loading iSCSI transport class v2.0-870. Oct 29 23:31:35.933288 kernel: iscsi: registered transport (tcp) Oct 29 23:31:35.946270 kernel: iscsi: registered transport (qla4xxx) Oct 29 23:31:35.946289 kernel: QLogic iSCSI HBA Driver Oct 29 23:31:35.962124 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:31:35.977360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:31:35.978881 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:31:36.025226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 29 23:31:36.028417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 29 23:31:36.082279 kernel: raid6: neonx8 gen() 15771 MB/s Oct 29 23:31:36.099265 kernel: raid6: neonx4 gen() 15796 MB/s Oct 29 23:31:36.116262 kernel: raid6: neonx2 gen() 13196 MB/s Oct 29 23:31:36.133271 kernel: raid6: neonx1 gen() 10395 MB/s Oct 29 23:31:36.150262 kernel: raid6: int64x8 gen() 6886 MB/s Oct 29 23:31:36.167271 kernel: raid6: int64x4 gen() 7343 MB/s Oct 29 23:31:36.184263 kernel: raid6: int64x2 gen() 6096 MB/s Oct 29 23:31:36.203272 kernel: raid6: int64x1 gen() 5041 MB/s Oct 29 23:31:36.203319 kernel: raid6: using algorithm neonx4 gen() 15796 MB/s Oct 29 23:31:36.219429 kernel: raid6: .... xor() 12331 MB/s, rmw enabled Oct 29 23:31:36.219460 kernel: raid6: using neon recovery algorithm Oct 29 23:31:36.225287 kernel: xor: measuring software checksum speed Oct 29 23:31:36.225329 kernel: 8regs : 21579 MB/sec Oct 29 23:31:36.226586 kernel: 32regs : 19070 MB/sec Oct 29 23:31:36.226619 kernel: arm64_neon : 27993 MB/sec Oct 29 23:31:36.226629 kernel: xor: using function: arm64_neon (27993 MB/sec) Oct 29 23:31:36.279285 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 29 23:31:36.285029 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Oct 29 23:31:36.287637 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:31:36.320879 systemd-udevd[501]: Using default interface naming scheme 'v255'. Oct 29 23:31:36.324929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:31:36.327002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 29 23:31:36.349330 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Oct 29 23:31:36.371966 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:31:36.374337 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:31:36.429570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:31:36.432597 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 29 23:31:36.491868 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 29 23:31:36.492054 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 29 23:31:36.497202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:31:36.497410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:31:36.506437 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 23:31:36.506459 kernel: GPT:9289727 != 19775487 Oct 29 23:31:36.506469 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 23:31:36.506478 kernel: GPT:9289727 != 19775487 Oct 29 23:31:36.506493 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 23:31:36.506503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 23:31:36.502672 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:31:36.509103 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 29 23:31:36.534705 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 29 23:31:36.536592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:31:36.544456 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 29 23:31:36.553835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 29 23:31:36.565271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:31:36.571359 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 29 23:31:36.572558 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 29 23:31:36.575517 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:31:36.577744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:31:36.579900 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:31:36.582701 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 29 23:31:36.584501 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 29 23:31:36.612204 disk-uuid[595]: Primary Header is updated. Oct 29 23:31:36.612204 disk-uuid[595]: Secondary Entries is updated. Oct 29 23:31:36.612204 disk-uuid[595]: Secondary Header is updated. 
Oct 29 23:31:36.616264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 23:31:36.616884 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:31:37.623264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 23:31:37.624582 disk-uuid[598]: The operation has completed successfully. Oct 29 23:31:37.648098 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 23:31:37.648198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 29 23:31:37.671838 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 29 23:31:37.689204 sh[616]: Success Oct 29 23:31:37.701264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 29 23:31:37.703191 kernel: device-mapper: uevent: version 1.0.3 Oct 29 23:31:37.703233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 29 23:31:37.710471 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 29 23:31:37.734983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 29 23:31:37.737769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 29 23:31:37.752682 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 29 23:31:37.759254 kernel: BTRFS: device fsid fb1de99b-69c1-4598-af66-3a61dd29143e devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (628) Oct 29 23:31:37.761659 kernel: BTRFS info (device dm-0): first mount of filesystem fb1de99b-69c1-4598-af66-3a61dd29143e Oct 29 23:31:37.761688 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:31:37.765687 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 29 23:31:37.765722 kernel: BTRFS info (device dm-0): enabling free space tree Oct 29 23:31:37.766832 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 29 23:31:37.768193 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:31:37.769621 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 29 23:31:37.770374 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 29 23:31:37.771910 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 29 23:31:37.799921 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (661) Oct 29 23:31:37.799973 kernel: BTRFS info (device vda6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0 Oct 29 23:31:37.801313 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:31:37.804395 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:31:37.804441 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:31:37.809254 kernel: BTRFS info (device vda6): last unmount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0 Oct 29 23:31:37.810168 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 29 23:31:37.812429 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 29 23:31:37.870893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:31:37.876672 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 29 23:31:37.916095 systemd-networkd[802]: lo: Link UP Oct 29 23:31:37.917008 systemd-networkd[802]: lo: Gained carrier Oct 29 23:31:37.917811 systemd-networkd[802]: Enumeration completed Oct 29 23:31:37.917971 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:31:37.919990 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:31:37.923524 ignition[704]: Ignition 2.22.0 Oct 29 23:31:37.919995 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:31:37.923530 ignition[704]: Stage: fetch-offline Oct 29 23:31:37.920254 systemd[1]: Reached target network.target - Network. Oct 29 23:31:37.923564 ignition[704]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:37.920807 systemd-networkd[802]: eth0: Link UP Oct 29 23:31:37.923572 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:37.921386 systemd-networkd[802]: eth0: Gained carrier Oct 29 23:31:37.923669 ignition[704]: parsed url from cmdline: "" Oct 29 23:31:37.921398 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:31:37.923672 ignition[704]: no config URL provided Oct 29 23:31:37.923676 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 23:31:37.923682 ignition[704]: no config at "/usr/lib/ignition/user.ign" Oct 29 23:31:37.938287 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:31:37.923710 ignition[704]: op(1): [started] loading QEMU firmware config module Oct 29 23:31:37.923714 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 23:31:37.929026 ignition[704]: op(1): [finished] loading QEMU firmware config module Oct 29 23:31:37.982305 ignition[704]: parsing config with SHA512: e9f746a76235171012c107e8f92fab0b516fa629ee7b784cbdd8a21ceabd8a0554ca0cee80afa5613c60b7fcb52e47a1bc2c44c23814a608d14297d72fe41150 Oct 29 23:31:37.987195 unknown[704]: fetched base config from "system" Oct 29 23:31:37.987207 unknown[704]: fetched user config from "qemu" Oct 29 23:31:37.987588 ignition[704]: fetch-offline: fetch-offline passed Oct 29 23:31:37.989520 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:31:37.987650 ignition[704]: Ignition finished successfully Oct 29 23:31:37.990974 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 23:31:37.991783 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 29 23:31:38.026483 ignition[816]: Ignition 2.22.0 Oct 29 23:31:38.026500 ignition[816]: Stage: kargs Oct 29 23:31:38.026629 ignition[816]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:38.026637 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:38.027395 ignition[816]: kargs: kargs passed Oct 29 23:31:38.027440 ignition[816]: Ignition finished successfully Oct 29 23:31:38.032855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 29 23:31:38.034854 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 29 23:31:38.066615 ignition[824]: Ignition 2.22.0 Oct 29 23:31:38.066636 ignition[824]: Stage: disks Oct 29 23:31:38.066771 ignition[824]: no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:38.069790 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 29 23:31:38.066779 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:38.071088 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 29 23:31:38.067552 ignition[824]: disks: disks passed Oct 29 23:31:38.072849 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 29 23:31:38.067597 ignition[824]: Ignition finished successfully Oct 29 23:31:38.074954 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:31:38.076840 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:31:38.078272 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:31:38.080997 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 29 23:31:38.119795 systemd-fsck[835]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 29 23:31:38.124609 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 29 23:31:38.127056 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 29 23:31:38.195269 kernel: EXT4-fs (vda9): mounted filesystem b8ba1a5d-9c06-458f-b680-11cfeb802ce1 r/w with ordered data mode. Quota mode: none. Oct 29 23:31:38.195288 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 29 23:31:38.196509 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 29 23:31:38.198902 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:31:38.200522 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 29 23:31:38.201518 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 29 23:31:38.201573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 23:31:38.201597 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:31:38.213780 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 29 23:31:38.215988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 29 23:31:38.220822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (843) Oct 29 23:31:38.220850 kernel: BTRFS info (device vda6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0 Oct 29 23:31:38.220860 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:31:38.227878 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:31:38.227960 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:31:38.230614 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:31:38.251791 initrd-setup-root[867]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 23:31:38.255953 initrd-setup-root[874]: cut: /sysroot/etc/group: No such file or directory Oct 29 23:31:38.259966 initrd-setup-root[881]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 23:31:38.263898 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 23:31:38.329775 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 29 23:31:38.331853 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 29 23:31:38.333430 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 29 23:31:38.352258 kernel: BTRFS info (device vda6): last unmount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0 Oct 29 23:31:38.364346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 29 23:31:38.379218 ignition[956]: INFO : Ignition 2.22.0 Oct 29 23:31:38.379218 ignition[956]: INFO : Stage: mount Oct 29 23:31:38.381888 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:38.381888 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:38.381888 ignition[956]: INFO : mount: mount passed Oct 29 23:31:38.381888 ignition[956]: INFO : Ignition finished successfully Oct 29 23:31:38.382648 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 29 23:31:38.384600 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 29 23:31:38.758901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 29 23:31:38.763747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 29 23:31:38.791261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (969) Oct 29 23:31:38.791301 kernel: BTRFS info (device vda6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0 Oct 29 23:31:38.793355 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 23:31:38.796256 kernel: BTRFS info (device vda6): turning on async discard Oct 29 23:31:38.796285 kernel: BTRFS info (device vda6): enabling free space tree Oct 29 23:31:38.797561 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 29 23:31:38.827555 ignition[986]: INFO : Ignition 2.22.0 Oct 29 23:31:38.827555 ignition[986]: INFO : Stage: files Oct 29 23:31:38.829359 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:38.829359 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:38.829359 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Oct 29 23:31:38.829359 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 23:31:38.829359 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 23:31:38.835813 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 23:31:38.835813 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 23:31:38.835813 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 23:31:38.835813 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 23:31:38.835813 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 29 23:31:38.831709 unknown[986]: wrote ssh authorized keys file for user: core Oct 29 23:31:38.999970 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 23:31:39.097406 systemd-networkd[802]: eth0: Gained IPv6LL Oct 29 23:31:39.184603 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 23:31:39.186860 ignition[986]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:31:39.186860 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 29 23:31:39.205618 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Oct 29 23:31:39.541326 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 29 23:31:39.789562 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 29 23:31:39.789562 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 29 23:31:39.793447 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 23:31:39.808334 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:31:39.811126 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 23:31:39.814252 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 23:31:39.814252 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 29 23:31:39.814252 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 23:31:39.814252 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:31:39.814252 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 23:31:39.814252 ignition[986]: INFO : files: files passed Oct 29 23:31:39.814252 ignition[986]: INFO : Ignition finished successfully Oct 29 23:31:39.815929 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 29 23:31:39.820202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 29 23:31:39.822232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 29 23:31:39.833871 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 23:31:39.833982 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 29 23:31:39.837109 initrd-setup-root-after-ignition[1016]: grep: /sysroot/oem/oem-release: No such file or directory Oct 29 23:31:39.840049 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:31:39.840049 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:31:39.843535 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 23:31:39.843967 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:31:39.846658 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 29 23:31:39.848632 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 29 23:31:39.891369 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 23:31:39.891507 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 23:31:39.893781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 23:31:39.895617 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 23:31:39.897383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 23:31:39.898344 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 23:31:39.930690 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Oct 29 23:31:39.933691 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 23:31:39.955528 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:31:39.956848 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:31:39.959025 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 23:31:39.960995 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 23:31:39.961129 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:31:39.963805 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 23:31:39.965984 systemd[1]: Stopped target basic.target - Basic System. Oct 29 23:31:39.967747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 23:31:39.969563 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:31:39.971750 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 23:31:39.973720 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:31:39.975786 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 23:31:39.977706 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:31:39.979830 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 23:31:39.981959 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 23:31:39.983850 systemd[1]: Stopped target swap.target - Swaps. Oct 29 23:31:39.985393 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 23:31:39.985524 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:31:39.987877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:31:39.989899 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:31:39.991859 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 23:31:39.995310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:31:39.996640 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 23:31:39.996781 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 23:31:39.999918 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 23:31:40.000048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:31:40.002169 systemd[1]: Stopped target paths.target - Path Units. Oct 29 23:31:40.003924 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 23:31:40.004074 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:31:40.006312 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 23:31:40.008161 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 23:31:40.010138 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 23:31:40.010232 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:31:40.012450 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 23:31:40.012530 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 29 23:31:40.014161 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 29 23:31:40.014301 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:31:40.016168 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 23:31:40.016282 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 29 23:31:40.018761 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 23:31:40.020412 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 23:31:40.020548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:31:40.040886 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 23:31:40.041805 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 23:31:40.041941 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:31:40.043998 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 23:31:40.044125 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:31:40.050335 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 23:31:40.050436 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 23:31:40.056022 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 23:31:40.061401 ignition[1043]: INFO : Ignition 2.22.0 Oct 29 23:31:40.061401 ignition[1043]: INFO : Stage: umount Oct 29 23:31:40.063262 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:31:40.063262 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 23:31:40.063262 ignition[1043]: INFO : umount: umount passed Oct 29 23:31:40.063262 ignition[1043]: INFO : Ignition finished successfully Oct 29 23:31:40.064496 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 23:31:40.065308 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 23:31:40.067401 systemd[1]: Stopped target network.target - Network. Oct 29 23:31:40.069450 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 23:31:40.069565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 23:31:40.071263 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 23:31:40.071318 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 23:31:40.073074 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 23:31:40.073128 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 23:31:40.075068 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 23:31:40.075115 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 23:31:40.077021 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 23:31:40.078761 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 23:31:40.087913 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 23:31:40.088039 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 23:31:40.091490 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 29 23:31:40.091733 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 23:31:40.091772 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:31:40.093964 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Oct 29 23:31:40.096043 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 23:31:40.096182 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 23:31:40.099524 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 29 23:31:40.099678 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 23:31:40.101050 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 23:31:40.101085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:31:40.106961 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 23:31:40.108884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 23:31:40.108963 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:31:40.111305 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 23:31:40.111359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:31:40.115860 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 23:31:40.115919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 23:31:40.121135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:31:40.125799 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 29 23:31:40.126182 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 23:31:40.126304 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 23:31:40.129816 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 23:31:40.129895 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 23:31:40.136936 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 23:31:40.138373 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:31:40.140352 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 23:31:40.140397 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 23:31:40.144454 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 23:31:40.144491 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:31:40.147331 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 23:31:40.147382 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:31:40.150378 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 23:31:40.150433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 23:31:40.153218 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 23:31:40.153291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:31:40.156983 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 23:31:40.158456 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 23:31:40.158518 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:31:40.162691 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 23:31:40.162734 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 29 23:31:40.165819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:31:40.165868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:31:40.169383 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 23:31:40.169495 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 23:31:40.173837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 23:31:40.173926 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 23:31:40.175709 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 23:31:40.178401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 23:31:40.188959 systemd[1]: Switching root. Oct 29 23:31:40.219692 systemd-journald[243]: Journal stopped Oct 29 23:31:41.020402 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Oct 29 23:31:41.020460 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 23:31:41.020471 kernel: SELinux: policy capability open_perms=1 Oct 29 23:31:41.020486 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 23:31:41.020501 kernel: SELinux: policy capability always_check_network=0 Oct 29 23:31:41.020512 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 23:31:41.020522 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 23:31:41.020532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 23:31:41.020542 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 23:31:41.020556 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 23:31:41.020565 kernel: audit: type=1403 audit(1761780700.384:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 23:31:41.020579 systemd[1]: Successfully loaded SELinux policy in 48.292ms. Oct 29 23:31:41.020599 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.428ms. Oct 29 23:31:41.020611 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:31:41.020622 systemd[1]: Detected virtualization kvm. Oct 29 23:31:41.020632 systemd[1]: Detected architecture arm64. Oct 29 23:31:41.020642 systemd[1]: Detected first boot. Oct 29 23:31:41.020656 systemd[1]: Initializing machine ID from VM UUID. Oct 29 23:31:41.020668 zram_generator::config[1088]: No configuration found. Oct 29 23:31:41.020679 kernel: NET: Registered PF_VSOCK protocol family Oct 29 23:31:41.020689 systemd[1]: Populated /etc with preset unit settings. Oct 29 23:31:41.020701 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 29 23:31:41.020711 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 23:31:41.020720 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 23:31:41.020730 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 23:31:41.020740 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 23:31:41.020751 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 23:31:41.020761 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Oct 29 23:31:41.020771 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 23:31:41.020781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 23:31:41.020791 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 23:31:41.020801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 23:31:41.020811 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 23:31:41.020821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:31:41.020831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:31:41.020842 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 23:31:41.020852 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 23:31:41.020862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 23:31:41.020872 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:31:41.020882 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 29 23:31:41.020893 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:31:41.020903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:31:41.020913 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 23:31:41.020924 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 23:31:41.020934 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 23:31:41.020945 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 29 23:31:41.020955 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:31:41.020965 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:31:41.020976 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:31:41.020986 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:31:41.020996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 23:31:41.021007 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 23:31:41.021018 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 23:31:41.021039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:31:41.021051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:31:41.021061 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:31:41.021071 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 23:31:41.021081 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 23:31:41.021091 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 23:31:41.021101 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 23:31:41.021110 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 23:31:41.021122 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 23:31:41.021132 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 29 23:31:41.021143 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 23:31:41.021157 systemd[1]: Reached target machines.target - Containers. Oct 29 23:31:41.021167 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 23:31:41.021177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:31:41.021193 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:31:41.021208 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 23:31:41.021219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:31:41.021229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:31:41.021249 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:31:41.021261 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 23:31:41.021270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:31:41.021281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 23:31:41.021293 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 23:31:41.021302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 23:31:41.021312 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 23:31:41.021324 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 23:31:41.021335 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:31:41.021345 kernel: fuse: init (API version 7.41) Oct 29 23:31:41.021355 kernel: loop: module loaded Oct 29 23:31:41.021364 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:31:41.021374 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:31:41.021385 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:31:41.021394 kernel: ACPI: bus type drm_connector registered Oct 29 23:31:41.021405 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 23:31:41.021416 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 23:31:41.021427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:31:41.021437 systemd[1]: verity-setup.service: Deactivated successfully. Oct 29 23:31:41.021447 systemd[1]: Stopped verity-setup.service. Oct 29 23:31:41.021462 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 23:31:41.021471 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 23:31:41.021509 systemd-journald[1149]: Collecting audit messages is disabled. Oct 29 23:31:41.021531 systemd[1]: Mounted media.mount - External Media Directory. 
Oct 29 23:31:41.021542 systemd-journald[1149]: Journal started Oct 29 23:31:41.021562 systemd-journald[1149]: Runtime Journal (/run/log/journal/c51f10cc45464405bd6b462cfd574ab1) is 6M, max 48.5M, 42.4M free. Oct 29 23:31:40.789349 systemd[1]: Queued start job for default target multi-user.target. Oct 29 23:31:40.801306 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 29 23:31:40.801725 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 23:31:41.024512 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:31:41.025285 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 23:31:41.026553 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 23:31:41.027811 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 23:31:41.030064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:31:41.032048 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 23:31:41.032464 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 23:31:41.034024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:31:41.034365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:31:41.035966 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 23:31:41.037729 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:31:41.038069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:31:41.039679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:31:41.039854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:31:41.041505 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 23:31:41.041675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 23:31:41.043134 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:31:41.043314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:31:41.044694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:31:41.046222 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:31:41.047864 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 23:31:41.049450 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 23:31:41.061943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:31:41.063937 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:31:41.066299 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 23:31:41.068403 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 23:31:41.069628 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 23:31:41.069670 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:31:41.071701 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 29 23:31:41.076313 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 29 23:31:41.077939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:31:41.079391 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 23:31:41.081484 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 23:31:41.082916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:31:41.083961 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 23:31:41.085298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:31:41.088208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:31:41.091464 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 23:31:41.094783 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 23:31:41.098214 systemd-journald[1149]: Time spent on flushing to /var/log/journal/c51f10cc45464405bd6b462cfd574ab1 is 12.648ms for 884 entries. Oct 29 23:31:41.098214 systemd-journald[1149]: System Journal (/var/log/journal/c51f10cc45464405bd6b462cfd574ab1) is 8M, max 195.6M, 187.6M free. Oct 29 23:31:41.121446 systemd-journald[1149]: Received client request to flush runtime journal. Oct 29 23:31:41.098912 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 23:31:41.104753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 23:31:41.108933 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 23:31:41.112935 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 23:31:41.116463 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 23:31:41.118921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:31:41.123282 kernel: loop0: detected capacity change from 0 to 100632 Oct 29 23:31:41.128506 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 23:31:41.132412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 23:31:41.145212 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 23:31:41.148519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:31:41.153487 kernel: loop1: detected capacity change from 0 to 119368 Oct 29 23:31:41.153671 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 23:31:41.156252 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 29 23:31:41.176459 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Oct 29 23:31:41.176477 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Oct 29 23:31:41.179653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 29 23:31:41.186490 kernel: loop2: detected capacity change from 0 to 211168 Oct 29 23:31:41.214278 kernel: loop3: detected capacity change from 0 to 100632 Oct 29 23:31:41.221286 kernel: loop4: detected capacity change from 0 to 119368 Oct 29 23:31:41.228266 kernel: loop5: detected capacity change from 0 to 211168 Oct 29 23:31:41.234340 (sd-merge)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 29 23:31:41.234762 (sd-merge)[1231]: Merged extensions into '/usr'. Oct 29 23:31:41.240317 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 23:31:41.240334 systemd[1]: Reloading... Oct 29 23:31:41.279326 zram_generator::config[1255]: No configuration found. Oct 29 23:31:41.367928 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 23:31:41.439897 systemd[1]: Reloading finished in 199 ms. Oct 29 23:31:41.465961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 23:31:41.467719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 23:31:41.488665 systemd[1]: Starting ensure-sysext.service... Oct 29 23:31:41.490945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:31:41.500407 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)... Oct 29 23:31:41.500537 systemd[1]: Reloading... Oct 29 23:31:41.504394 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 23:31:41.504425 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 23:31:41.504634 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 23:31:41.504819 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 29 23:31:41.505429 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 23:31:41.505628 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Oct 29 23:31:41.505675 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Oct 29 23:31:41.508353 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:31:41.508367 systemd-tmpfiles[1293]: Skipping /boot Oct 29 23:31:41.514148 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:31:41.514164 systemd-tmpfiles[1293]: Skipping /boot Oct 29 23:31:41.542267 zram_generator::config[1320]: No configuration found. Oct 29 23:31:41.676510 systemd[1]: Reloading finished in 175 ms. Oct 29 23:31:41.697938 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 23:31:41.704080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:31:41.716408 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:31:41.719332 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 23:31:41.722231 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 23:31:41.725724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 29 23:31:41.729300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:31:41.732442 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 23:31:41.742008 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 23:31:41.747279 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 23:31:41.754634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:31:41.757207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:31:41.761748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:31:41.766861 systemd-udevd[1366]: Using default interface naming scheme 'v255'. Oct 29 23:31:41.769186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:31:41.770613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:31:41.770783 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:31:41.772605 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 23:31:41.775818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:31:41.776887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:31:41.783438 augenrules[1387]: No rules Oct 29 23:31:41.783955 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 23:31:41.786219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:31:41.786401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:31:41.789969 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:31:41.790184 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:31:41.791933 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:31:41.792117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:31:41.795727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:31:41.799047 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 23:31:41.804884 systemd[1]: Finished ensure-sysext.service. Oct 29 23:31:41.806411 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 23:31:41.814794 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 23:31:41.828821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:31:41.830687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:31:41.835507 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:31:41.837561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 29 23:31:41.837605 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:31:41.857564 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:31:41.860415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:31:41.872991 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 29 23:31:41.874928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 23:31:41.883744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:31:41.884496 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:31:41.887229 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:31:41.887436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:31:41.893919 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 29 23:31:41.914172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:31:41.917798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 29 23:31:41.922375 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 23:31:41.956470 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 23:31:41.987586 systemd-resolved[1360]: Positive Trust Anchors: Oct 29 23:31:41.987881 systemd-resolved[1360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:31:41.987918 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:31:41.991938 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 29 23:31:41.993491 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 23:31:41.995770 systemd-resolved[1360]: Defaulting to hostname 'linux'. Oct 29 23:31:41.996265 systemd-networkd[1435]: lo: Link UP Oct 29 23:31:41.996273 systemd-networkd[1435]: lo: Gained carrier Oct 29 23:31:41.997147 systemd-networkd[1435]: Enumeration completed Oct 29 23:31:41.997273 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:31:41.997592 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:31:41.997603 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 29 23:31:41.998232 systemd-networkd[1435]: eth0: Link UP Oct 29 23:31:41.998372 systemd-networkd[1435]: eth0: Gained carrier Oct 29 23:31:41.998393 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:31:41.998664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:31:42.000630 systemd[1]: Reached target network.target - Network. Oct 29 23:31:42.001624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:31:42.003393 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:31:42.004570 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 23:31:42.006428 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 23:31:42.007924 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 23:31:42.010578 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 29 23:31:42.011994 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 23:31:42.013416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 23:31:42.013448 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:31:42.014368 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:31:42.016277 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 23:31:42.018303 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 23:31:42.019677 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 23:31:42.024657 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 23:31:42.026228 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 23:31:42.027542 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 23:31:42.029957 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Oct 29 23:31:42.030634 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 29 23:31:42.030685 systemd-timesyncd[1437]: Initial clock synchronization to Wed 2025-10-29 23:31:42.381033 UTC. Oct 29 23:31:42.031078 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 23:31:42.032531 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 23:31:42.034955 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 23:31:42.042293 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 23:31:42.044986 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 23:31:42.051724 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:31:42.052822 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:31:42.053897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:31:42.053937 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 29 23:31:42.063067 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 23:31:42.065307 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 23:31:42.067327 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 23:31:42.070540 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 23:31:42.072670 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 23:31:42.074004 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 23:31:42.075008 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 23:31:42.078348 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 23:31:42.079230 jq[1472]: false Oct 29 23:31:42.081826 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 23:31:42.084438 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 23:31:42.088414 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 23:31:42.090532 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 23:31:42.091002 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 23:31:42.092384 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 23:31:42.094424 extend-filesystems[1473]: Found /dev/vda6 Oct 29 23:31:42.095674 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 23:31:42.098338 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 23:31:42.102922 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 23:31:42.103095 jq[1488]: true Oct 29 23:31:42.103446 extend-filesystems[1473]: Found /dev/vda9 Oct 29 23:31:42.104875 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 23:31:42.105446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 23:31:42.105749 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 23:31:42.105926 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 23:31:42.107755 extend-filesystems[1473]: Checking size of /dev/vda9 Oct 29 23:31:42.109462 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 23:31:42.113929 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 23:31:42.119340 extend-filesystems[1473]: Resized partition /dev/vda9 Oct 29 23:31:42.122914 extend-filesystems[1502]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 23:31:42.133406 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 29 23:31:42.137946 update_engine[1487]: I20251029 23:31:42.137686 1487 main.cc:92] Flatcar Update Engine starting Oct 29 23:31:42.138956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 29 23:31:42.142417 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 23:31:42.145677 tar[1496]: linux-arm64/LICENSE Oct 29 23:31:42.145677 tar[1496]: linux-arm64/helm Oct 29 23:31:42.147119 jq[1499]: true Oct 29 23:31:42.174497 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Oct 29 23:31:42.175811 systemd-logind[1486]: New seat seat0. Oct 29 23:31:42.178461 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 23:31:42.186092 dbus-daemon[1470]: [system] SELinux support is enabled Oct 29 23:31:42.186390 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 23:31:42.190069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 23:31:42.190108 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 23:31:42.192321 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 23:31:42.192344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 23:31:42.193997 update_engine[1487]: I20251029 23:31:42.193940 1487 update_check_scheduler.cc:74] Next update check in 8m24s Oct 29 23:31:42.196846 systemd[1]: Started update-engine.service - Update Engine. Oct 29 23:31:42.196905 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 29 23:31:42.200257 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 29 23:31:42.204734 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 29 23:31:42.220874 extend-filesystems[1502]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 23:31:42.220874 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 29 23:31:42.220874 extend-filesystems[1502]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 29 23:31:42.233098 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Oct 29 23:31:42.236361 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Oct 29 23:31:42.222375 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 23:31:42.223473 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 23:31:42.260330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 23:31:42.267748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:31:42.278307 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 29 23:31:42.303951 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 23:31:42.317114 containerd[1503]: time="2025-10-29T23:31:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 23:31:42.319276 containerd[1503]: time="2025-10-29T23:31:42.317815480Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 23:31:42.326325 containerd[1503]: time="2025-10-29T23:31:42.326278600Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.88µs" Oct 29 23:31:42.326325 containerd[1503]: time="2025-10-29T23:31:42.326317720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 23:31:42.326325 containerd[1503]: time="2025-10-29T23:31:42.326336320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 23:31:42.326529 containerd[1503]: time="2025-10-29T23:31:42.326509280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 23:31:42.326553 containerd[1503]: time="2025-10-29T23:31:42.326534200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 23:31:42.326573 containerd[1503]: time="2025-10-29T23:31:42.326560480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326623 containerd[1503]: time="2025-10-29T23:31:42.326608600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326643 containerd[1503]: time="2025-10-29T23:31:42.326622280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326885 containerd[1503]: time="2025-10-29T23:31:42.326845160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326885 containerd[1503]: time="2025-10-29T23:31:42.326866920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326885 containerd[1503]: time="2025-10-29T23:31:42.326878560Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326885 containerd[1503]: time="2025-10-29T23:31:42.326885960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 23:31:42.326962 containerd[1503]: time="2025-10-29T23:31:42.326951360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 23:31:42.327159 containerd[1503]: time="2025-10-29T23:31:42.327138120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:31:42.327185 containerd[1503]: time="2025-10-29T23:31:42.327173120Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:31:42.327203 containerd[1503]: time="2025-10-29T23:31:42.327183560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 23:31:42.327226 containerd[1503]: time="2025-10-29T23:31:42.327211800Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 23:31:42.327478 containerd[1503]: time="2025-10-29T23:31:42.327461800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 23:31:42.327546 containerd[1503]: time="2025-10-29T23:31:42.327531960Z" level=info msg="metadata content store policy set" policy=shared Oct 29 23:31:42.330964 containerd[1503]: time="2025-10-29T23:31:42.330911440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 23:31:42.331091 containerd[1503]: time="2025-10-29T23:31:42.330985160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 23:31:42.331091 containerd[1503]: time="2025-10-29T23:31:42.331003480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 23:31:42.331091 containerd[1503]: time="2025-10-29T23:31:42.331027600Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 23:31:42.331091 containerd[1503]: time="2025-10-29T23:31:42.331072880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 23:31:42.331091 containerd[1503]: time="2025-10-29T23:31:42.331087840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331101280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331113840Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331126840Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331137160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331145960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 23:31:42.331186 containerd[1503]: time="2025-10-29T23:31:42.331158400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 23:31:42.331330 containerd[1503]: time="2025-10-29T23:31:42.331308280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 23:31:42.331354 containerd[1503]: time="2025-10-29T23:31:42.331336120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 23:31:42.331372 containerd[1503]: time="2025-10-29T23:31:42.331360680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 
23:31:42.331389 containerd[1503]: time="2025-10-29T23:31:42.331373000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 23:31:42.331389 containerd[1503]: time="2025-10-29T23:31:42.331384320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 23:31:42.331424 containerd[1503]: time="2025-10-29T23:31:42.331394880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 23:31:42.331424 containerd[1503]: time="2025-10-29T23:31:42.331406360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 23:31:42.331424 containerd[1503]: time="2025-10-29T23:31:42.331416320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 23:31:42.331475 containerd[1503]: time="2025-10-29T23:31:42.331427240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 23:31:42.331475 containerd[1503]: time="2025-10-29T23:31:42.331438920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 23:31:42.331475 containerd[1503]: time="2025-10-29T23:31:42.331449720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 23:31:42.331670 containerd[1503]: time="2025-10-29T23:31:42.331653960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 23:31:42.331692 containerd[1503]: time="2025-10-29T23:31:42.331676960Z" level=info msg="Start snapshots syncer" Oct 29 23:31:42.331722 containerd[1503]: time="2025-10-29T23:31:42.331704800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 23:31:42.331975 containerd[1503]: time="2025-10-29T23:31:42.331931520Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 23:31:42.332101 containerd[1503]: time="2025-10-29T23:31:42.331993680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 23:31:42.332101 containerd[1503]: time="2025-10-29T23:31:42.332079720Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 23:31:42.332265 containerd[1503]: time="2025-10-29T23:31:42.332209760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 23:31:42.332265 containerd[1503]: time="2025-10-29T23:31:42.332262360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 23:31:42.332310 containerd[1503]: time="2025-10-29T23:31:42.332274960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 23:31:42.332310 containerd[1503]: time="2025-10-29T23:31:42.332290120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 23:31:42.332354 containerd[1503]: time="2025-10-29T23:31:42.332309680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 23:31:42.332354 containerd[1503]: time="2025-10-29T23:31:42.332321480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 23:31:42.332354 containerd[1503]: time="2025-10-29T23:31:42.332332520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 23:31:42.332401 containerd[1503]: time="2025-10-29T23:31:42.332358200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 23:31:42.332401 containerd[1503]: 
time="2025-10-29T23:31:42.332369520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 23:31:42.332401 containerd[1503]: time="2025-10-29T23:31:42.332380680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 23:31:42.332450 containerd[1503]: time="2025-10-29T23:31:42.332421640Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:31:42.332450 containerd[1503]: time="2025-10-29T23:31:42.332436520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:31:42.332450 containerd[1503]: time="2025-10-29T23:31:42.332444560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:31:42.332497 containerd[1503]: time="2025-10-29T23:31:42.332453720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:31:42.332497 containerd[1503]: time="2025-10-29T23:31:42.332461640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 23:31:42.332626 containerd[1503]: time="2025-10-29T23:31:42.332604120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 23:31:42.332656 containerd[1503]: time="2025-10-29T23:31:42.332631080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 23:31:42.332720 containerd[1503]: time="2025-10-29T23:31:42.332709160Z" level=info msg="runtime interface created" Oct 29 23:31:42.332720 containerd[1503]: time="2025-10-29T23:31:42.332716560Z" level=info msg="created NRI interface" Oct 29 23:31:42.332757 containerd[1503]: time="2025-10-29T23:31:42.332725320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 23:31:42.332757 containerd[1503]: time="2025-10-29T23:31:42.332737400Z" level=info msg="Connect containerd service" Oct 29 23:31:42.332789 containerd[1503]: time="2025-10-29T23:31:42.332769120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 23:31:42.333586 containerd[1503]: time="2025-10-29T23:31:42.333557440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:31:42.402517 containerd[1503]: time="2025-10-29T23:31:42.402435720Z" level=info msg="Start subscribing containerd event" Oct 29 23:31:42.402617 containerd[1503]: time="2025-10-29T23:31:42.402533200Z" level=info msg="Start recovering state" Oct 29 23:31:42.402636 containerd[1503]: time="2025-10-29T23:31:42.402622720Z" level=info msg="Start event monitor" Oct 29 23:31:42.402654 containerd[1503]: time="2025-10-29T23:31:42.402636800Z" level=info msg="Start cni network conf syncer for default" Oct 29 23:31:42.402654 containerd[1503]: time="2025-10-29T23:31:42.402644560Z" level=info msg="Start streaming server" Oct 29 23:31:42.402716 containerd[1503]: time="2025-10-29T23:31:42.402655920Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 23:31:42.402716 containerd[1503]: 
time="2025-10-29T23:31:42.402663640Z" level=info msg="runtime interface starting up..." Oct 29 23:31:42.402716 containerd[1503]: time="2025-10-29T23:31:42.402668960Z" level=info msg="starting plugins..." Oct 29 23:31:42.402716 containerd[1503]: time="2025-10-29T23:31:42.402683360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 23:31:42.402788 containerd[1503]: time="2025-10-29T23:31:42.402742760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 23:31:42.402806 containerd[1503]: time="2025-10-29T23:31:42.402790120Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 23:31:42.402886 containerd[1503]: time="2025-10-29T23:31:42.402849760Z" level=info msg="containerd successfully booted in 0.086120s" Oct 29 23:31:42.402969 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 23:31:42.497944 tar[1496]: linux-arm64/README.md Oct 29 23:31:42.515956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 23:31:43.303328 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 23:31:43.323657 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 23:31:43.326534 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 23:31:43.354094 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 23:31:43.354342 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 23:31:43.357065 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 23:31:43.388464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 23:31:43.391382 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 29 23:31:43.393785 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 29 23:31:43.395332 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 23:31:43.962421 systemd-networkd[1435]: eth0: Gained IPv6LL Oct 29 23:31:43.964244 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 23:31:43.969081 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 23:31:43.971903 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 29 23:31:43.975856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:43.986336 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 29 23:31:44.010626 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 23:31:44.012597 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 29 23:31:44.012806 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 29 23:31:44.015090 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 23:31:44.571414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:44.573029 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 23:31:44.575208 systemd[1]: Startup finished in 2.055s (kernel) + 4.782s (initrd) + 4.239s (userspace) = 11.076s. 
Oct 29 23:31:44.575314 (kubelet)[1611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:31:44.947228 kubelet[1611]: E1029 23:31:44.947114 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:31:44.949929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:31:44.950179 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:31:44.950843 systemd[1]: kubelet.service: Consumed 754ms CPU time, 258.7M memory peak. Oct 29 23:31:48.904949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 23:31:48.906248 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:52580.service - OpenSSH per-connection server daemon (10.0.0.1:52580). Oct 29 23:31:48.979532 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 52580 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:48.981797 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:48.988169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 23:31:48.989097 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 23:31:48.995767 systemd-logind[1486]: New session 1 of user core. Oct 29 23:31:49.010085 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 23:31:49.012599 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 23:31:49.028341 (systemd)[1629]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 23:31:49.030904 systemd-logind[1486]: New session c1 of user core. Oct 29 23:31:49.142478 systemd[1629]: Queued start job for default target default.target. Oct 29 23:31:49.155329 systemd[1629]: Created slice app.slice - User Application Slice. Oct 29 23:31:49.155358 systemd[1629]: Reached target paths.target - Paths. Oct 29 23:31:49.155397 systemd[1629]: Reached target timers.target - Timers. Oct 29 23:31:49.156608 systemd[1629]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 23:31:49.166516 systemd[1629]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 23:31:49.166660 systemd[1629]: Reached target sockets.target - Sockets. Oct 29 23:31:49.166775 systemd[1629]: Reached target basic.target - Basic System. Oct 29 23:31:49.166902 systemd[1629]: Reached target default.target - Main User Target. Oct 29 23:31:49.166941 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 23:31:49.167028 systemd[1629]: Startup finished in 130ms. Oct 29 23:31:49.168027 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 23:31:49.234199 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:34840.service - OpenSSH per-connection server daemon (10.0.0.1:34840). Oct 29 23:31:49.287465 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 34840 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:49.288789 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:49.292699 systemd-logind[1486]: New session 2 of user core. 
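Editor's note: the kubelet above exits immediately because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file (together with /etc/kubernetes/kubelet.conf) is only written when kubeadm init or kubeadm join runs, so this restart loop is expected at this point in the boot. A small sketch, assuming the standard kubeadm paths, that reports which of them are present:

#!/usr/bin/env python3
"""Sketch: check for the files kubeadm normally writes before the kubelet can
start cleanly. Paths are the usual kubeadm defaults (an assumption, not read
from this host's configuration)."""
from pathlib import Path

EXPECTED = [
    Path("/var/lib/kubelet/config.yaml"),   # KubeletConfiguration (the file named in the error above)
    Path("/etc/kubernetes/kubelet.conf"),   # kubeconfig the kubelet uses to reach the API server
    Path("/etc/kubernetes/manifests"),      # static pod manifest directory
]

for p in EXPECTED:
    print(f"{p}: {'present' if p.exists() else 'missing'}")

if not EXPECTED[0].exists():
    print("kubelet will keep exiting until kubeadm init/join writes its config")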
Oct 29 23:31:49.298432 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 23:31:49.351677 sshd[1643]: Connection closed by 10.0.0.1 port 34840 Oct 29 23:31:49.352141 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:49.363233 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:34840.service: Deactivated successfully. Oct 29 23:31:49.365435 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 23:31:49.367217 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Oct 29 23:31:49.368532 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Oct 29 23:31:49.369451 systemd-logind[1486]: Removed session 2. Oct 29 23:31:49.425005 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:49.426300 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:49.430345 systemd-logind[1486]: New session 3 of user core. Oct 29 23:31:49.439537 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 23:31:49.489015 sshd[1652]: Connection closed by 10.0.0.1 port 34856 Oct 29 23:31:49.489315 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:49.499388 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:34856.service: Deactivated successfully. Oct 29 23:31:49.501630 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 23:31:49.502315 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Oct 29 23:31:49.504468 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:34860.service - OpenSSH per-connection server daemon (10.0.0.1:34860). Oct 29 23:31:49.505409 systemd-logind[1486]: Removed session 3. Oct 29 23:31:49.558743 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 34860 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:49.560056 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:49.564591 systemd-logind[1486]: New session 4 of user core. Oct 29 23:31:49.579431 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 23:31:49.634029 sshd[1661]: Connection closed by 10.0.0.1 port 34860 Oct 29 23:31:49.633896 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:49.650396 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:34860.service: Deactivated successfully. Oct 29 23:31:49.651859 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 23:31:49.652519 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Oct 29 23:31:49.654427 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868). Oct 29 23:31:49.655460 systemd-logind[1486]: Removed session 4. Oct 29 23:31:49.710105 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:49.711475 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:49.715125 systemd-logind[1486]: New session 5 of user core. Oct 29 23:31:49.725437 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 29 23:31:49.782139 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 23:31:49.782430 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:31:49.795204 sudo[1671]: pam_unix(sudo:session): session closed for user root Oct 29 23:31:49.798041 sshd[1670]: Connection closed by 10.0.0.1 port 34868 Oct 29 23:31:49.797848 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:49.808346 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:34868.service: Deactivated successfully. Oct 29 23:31:49.809923 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 23:31:49.811863 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Oct 29 23:31:49.814139 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:34882.service - OpenSSH per-connection server daemon (10.0.0.1:34882). Oct 29 23:31:49.815187 systemd-logind[1486]: Removed session 5. Oct 29 23:31:49.867664 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 34882 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:49.868992 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:49.873541 systemd-logind[1486]: New session 6 of user core. Oct 29 23:31:49.883443 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 23:31:49.936643 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 23:31:49.937234 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:31:50.017784 sudo[1682]: pam_unix(sudo:session): session closed for user root Oct 29 23:31:50.023018 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 23:31:50.023315 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:31:50.031452 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:31:50.063151 augenrules[1704]: No rules Oct 29 23:31:50.064418 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:31:50.065371 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:31:50.066309 sudo[1681]: pam_unix(sudo:session): session closed for user root Oct 29 23:31:50.067607 sshd[1680]: Connection closed by 10.0.0.1 port 34882 Oct 29 23:31:50.068016 sshd-session[1677]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:50.079335 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:34882.service: Deactivated successfully. Oct 29 23:31:50.081764 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 23:31:50.082560 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Oct 29 23:31:50.085095 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888). Oct 29 23:31:50.085942 systemd-logind[1486]: Removed session 6. Oct 29 23:31:50.139132 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:31:50.140351 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:31:50.144764 systemd-logind[1486]: New session 7 of user core. Oct 29 23:31:50.153430 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 29 23:31:50.205288 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 23:31:50.205555 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:31:50.501501 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 29 23:31:50.521650 (dockerd)[1737]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 23:31:50.726605 dockerd[1737]: time="2025-10-29T23:31:50.726517114Z" level=info msg="Starting up" Oct 29 23:31:50.727511 dockerd[1737]: time="2025-10-29T23:31:50.727412580Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 23:31:50.738369 dockerd[1737]: time="2025-10-29T23:31:50.738331743Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 23:31:50.771727 dockerd[1737]: time="2025-10-29T23:31:50.771431015Z" level=info msg="Loading containers: start." Oct 29 23:31:50.781267 kernel: Initializing XFRM netlink socket Oct 29 23:31:50.982156 systemd-networkd[1435]: docker0: Link UP Oct 29 23:31:50.985790 dockerd[1737]: time="2025-10-29T23:31:50.985741444Z" level=info msg="Loading containers: done." Oct 29 23:31:51.004820 dockerd[1737]: time="2025-10-29T23:31:51.004761358Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 23:31:51.004973 dockerd[1737]: time="2025-10-29T23:31:51.004851900Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 23:31:51.004973 dockerd[1737]: time="2025-10-29T23:31:51.004934160Z" level=info msg="Initializing buildkit" Oct 29 23:31:51.026898 dockerd[1737]: time="2025-10-29T23:31:51.026535731Z" level=info msg="Completed buildkit initialization" Oct 29 23:31:51.031874 dockerd[1737]: time="2025-10-29T23:31:51.031828588Z" level=info msg="Daemon has completed initialization" Oct 29 23:31:51.032088 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 23:31:51.032730 dockerd[1737]: time="2025-10-29T23:31:51.032579322Z" level=info msg="API listen on /run/docker.sock" Oct 29 23:31:51.652241 containerd[1503]: time="2025-10-29T23:31:51.651883298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 29 23:31:52.517498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971965144.mount: Deactivated successfully. 
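Editor's note: dockerd warns above that it is not using native diff for overlay2 because the kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR. When the kernel was also built with CONFIG_IKCONFIG_PROC, that option can be read back from /proc/config.gz; the sketch below assumes that file is available, which is not guaranteed on every kernel.

#!/usr/bin/env python3
"""Sketch: look up overlayfs-related options in the running kernel's config.
Relies on /proc/config.gz, which only exists when the kernel was built with
CONFIG_IKCONFIG_PROC=y (an assumption about this host)."""
import gzip

WANTED = ("CONFIG_OVERLAY_FS_REDIRECT_DIR", "CONFIG_OVERLAY_FS_INDEX")

try:
    with gzip.open("/proc/config.gz", "rt") as f:
        for line in f:
            if line.startswith(WANTED):
                print(line.strip())
except FileNotFoundError:
    print("/proc/config.gz not available; check /boot/config-$(uname -r) instead")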
Oct 29 23:31:53.596373 containerd[1503]: time="2025-10-29T23:31:53.596302034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:53.596938 containerd[1503]: time="2025-10-29T23:31:53.596897337Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Oct 29 23:31:53.597763 containerd[1503]: time="2025-10-29T23:31:53.597712829Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:53.600350 containerd[1503]: time="2025-10-29T23:31:53.600275308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:53.601336 containerd[1503]: time="2025-10-29T23:31:53.601305324Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.94937776s" Oct 29 23:31:53.601544 containerd[1503]: time="2025-10-29T23:31:53.601417887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 29 23:31:53.603154 containerd[1503]: time="2025-10-29T23:31:53.603118747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 29 23:31:54.758494 containerd[1503]: time="2025-10-29T23:31:54.758437563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:54.759363 containerd[1503]: time="2025-10-29T23:31:54.759326923Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Oct 29 23:31:54.760711 containerd[1503]: time="2025-10-29T23:31:54.760684922Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:54.763091 containerd[1503]: time="2025-10-29T23:31:54.763047874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:54.764256 containerd[1503]: time="2025-10-29T23:31:54.764139736Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.160980417s" Oct 29 23:31:54.764256 containerd[1503]: time="2025-10-29T23:31:54.764169392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 29 23:31:54.764893 
containerd[1503]: time="2025-10-29T23:31:54.764817180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 29 23:31:55.200555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 23:31:55.202231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:55.337618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:55.341264 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:31:55.382262 kubelet[2026]: E1029 23:31:55.380789 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:31:55.384914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:31:55.385043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:31:55.387339 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.3M memory peak. Oct 29 23:31:55.997608 containerd[1503]: time="2025-10-29T23:31:55.997548614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:55.998169 containerd[1503]: time="2025-10-29T23:31:55.998129939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Oct 29 23:31:55.999106 containerd[1503]: time="2025-10-29T23:31:55.999054264Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:56.002297 containerd[1503]: time="2025-10-29T23:31:56.002225298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:56.003716 containerd[1503]: time="2025-10-29T23:31:56.003686615Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.238715436s" Oct 29 23:31:56.003768 containerd[1503]: time="2025-10-29T23:31:56.003721362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 29 23:31:56.004229 containerd[1503]: time="2025-10-29T23:31:56.004200594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 29 23:31:56.970721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418977197.mount: Deactivated successfully. 
Oct 29 23:31:57.246399 containerd[1503]: time="2025-10-29T23:31:57.246276639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:57.247445 containerd[1503]: time="2025-10-29T23:31:57.247404085Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Oct 29 23:31:57.248182 containerd[1503]: time="2025-10-29T23:31:57.248139500Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:57.250290 containerd[1503]: time="2025-10-29T23:31:57.250256708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:57.251002 containerd[1503]: time="2025-10-29T23:31:57.250962042Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.246728443s" Oct 29 23:31:57.251034 containerd[1503]: time="2025-10-29T23:31:57.250999413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 29 23:31:57.251546 containerd[1503]: time="2025-10-29T23:31:57.251507784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 29 23:31:57.727515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282052472.mount: Deactivated successfully. 
Oct 29 23:31:58.646169 containerd[1503]: time="2025-10-29T23:31:58.646117013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.647147 containerd[1503]: time="2025-10-29T23:31:58.646902912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Oct 29 23:31:58.647957 containerd[1503]: time="2025-10-29T23:31:58.647924073Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.651305 containerd[1503]: time="2025-10-29T23:31:58.650614404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.652424 containerd[1503]: time="2025-10-29T23:31:58.652384648Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.400844575s" Oct 29 23:31:58.652424 containerd[1503]: time="2025-10-29T23:31:58.652421746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 29 23:31:58.653150 containerd[1503]: time="2025-10-29T23:31:58.653127413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 23:31:59.078768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818680265.mount: Deactivated successfully. 
Oct 29 23:31:59.083495 containerd[1503]: time="2025-10-29T23:31:59.083445233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:59.084412 containerd[1503]: time="2025-10-29T23:31:59.084376103Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 29 23:31:59.086639 containerd[1503]: time="2025-10-29T23:31:59.085575716Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:59.090032 containerd[1503]: time="2025-10-29T23:31:59.089996748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:31:59.090796 containerd[1503]: time="2025-10-29T23:31:59.090757945Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 437.601648ms" Oct 29 23:31:59.090796 containerd[1503]: time="2025-10-29T23:31:59.090791236Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 29 23:31:59.091223 containerd[1503]: time="2025-10-29T23:31:59.091198210Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 29 23:31:59.569826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314974864.mount: Deactivated successfully. 
Oct 29 23:32:01.498273 containerd[1503]: time="2025-10-29T23:32:01.497886889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:01.498596 containerd[1503]: time="2025-10-29T23:32:01.498500992Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Oct 29 23:32:01.499710 containerd[1503]: time="2025-10-29T23:32:01.499669361Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:01.502703 containerd[1503]: time="2025-10-29T23:32:01.502669395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:01.504624 containerd[1503]: time="2025-10-29T23:32:01.504496121Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.412790028s" Oct 29 23:32:01.504624 containerd[1503]: time="2025-10-29T23:32:01.504532223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 29 23:32:05.635835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 23:32:05.637266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:32:05.770784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:05.778502 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:32:05.810747 kubelet[2191]: E1029 23:32:05.810682 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:32:05.813377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:32:05.813600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:32:05.814047 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107.2M memory peak. Oct 29 23:32:07.413653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:07.414129 systemd[1]: kubelet.service: Consumed 133ms CPU time, 107.2M memory peak. Oct 29 23:32:07.416176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:32:07.437036 systemd[1]: Reload requested from client PID 2208 ('systemctl') (unit session-7.scope)... Oct 29 23:32:07.437052 systemd[1]: Reloading... Oct 29 23:32:07.500267 zram_generator::config[2251]: No configuration found. Oct 29 23:32:07.679687 systemd[1]: Reloading finished in 242 ms. Oct 29 23:32:07.745853 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 23:32:07.745946 systemd[1]: kubelet.service: Failed with result 'signal'. 
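Editor's note: each control-plane image pull above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) is logged with its repo digest, size in bytes, and wall-clock pull duration. A sketch that summarizes those entries from journal output; the regex matches the message format as it appears in this log, which is an observation, not a stable containerd interface.

#!/usr/bin/env python3
"""Sketch: summarize containerd 'Pulled image ... in <duration>' entries piped
in on stdin. Works on the escaped format shown above as well as unescaped
journal output."""
import re
import sys

PULLED = re.compile(
    r'Pulled image \\?"(?P<image>[^"\\]+)\\?" .* size \\?"(?P<size>\d+)\\?" '
    r'in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def seconds(value: str, unit: str) -> float:
    return float(value) / 1000.0 if unit == "ms" else float(value)

total_bytes = 0
for line in sys.stdin:
    m = PULLED.search(line)
    if not m:
        continue
    dur = seconds(m.group("dur"), m.group("unit"))
    size = int(m.group("size"))
    total_bytes += size
    print(f"{m.group('image')}: {size / 1e6:.1f} MB in {dur:.2f}s "
          f"({size / 1e6 / dur:.1f} MB/s)")
print(f"total: {total_bytes / 1e6:.1f} MB")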
Oct 29 23:32:07.746280 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:07.746334 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak. Oct 29 23:32:07.748013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:32:07.861782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:07.865853 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:32:07.898292 kubelet[2296]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:32:07.898292 kubelet[2296]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:32:07.898292 kubelet[2296]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:32:07.898624 kubelet[2296]: I1029 23:32:07.898361 2296 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:32:08.615277 kubelet[2296]: I1029 23:32:08.613971 2296 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 23:32:08.615277 kubelet[2296]: I1029 23:32:08.614010 2296 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:32:08.615277 kubelet[2296]: I1029 23:32:08.614278 2296 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 23:32:08.643644 kubelet[2296]: E1029 23:32:08.643586 2296 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 23:32:08.648281 kubelet[2296]: I1029 23:32:08.648229 2296 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:32:08.655070 kubelet[2296]: I1029 23:32:08.655048 2296 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:32:08.659263 kubelet[2296]: I1029 23:32:08.659161 2296 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 23:32:08.661501 kubelet[2296]: I1029 23:32:08.661459 2296 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:32:08.661758 kubelet[2296]: I1029 23:32:08.661583 2296 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:32:08.661946 kubelet[2296]: I1029 23:32:08.661933 2296 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:32:08.661991 kubelet[2296]: I1029 23:32:08.661984 2296 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 23:32:08.662769 kubelet[2296]: I1029 23:32:08.662751 2296 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:32:08.665368 kubelet[2296]: I1029 23:32:08.665338 2296 kubelet.go:480] "Attempting to sync node with API server" Oct 29 23:32:08.665449 kubelet[2296]: I1029 23:32:08.665439 2296 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:32:08.665512 kubelet[2296]: I1029 23:32:08.665503 2296 kubelet.go:386] "Adding apiserver pod source" Oct 29 23:32:08.665558 kubelet[2296]: I1029 23:32:08.665550 2296 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:32:08.667133 kubelet[2296]: E1029 23:32:08.667073 2296 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 23:32:08.667204 kubelet[2296]: I1029 23:32:08.667170 2296 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:32:08.668117 kubelet[2296]: E1029 23:32:08.668092 2296 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 23:32:08.668405 kubelet[2296]: I1029 23:32:08.668384 2296 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 23:32:08.668518 kubelet[2296]: W1029 23:32:08.668505 2296 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 23:32:08.673490 kubelet[2296]: I1029 23:32:08.673455 2296 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 23:32:08.673544 kubelet[2296]: I1029 23:32:08.673514 2296 server.go:1289] "Started kubelet" Oct 29 23:32:08.676066 kubelet[2296]: I1029 23:32:08.676026 2296 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 23:32:08.678316 kubelet[2296]: I1029 23:32:08.678297 2296 server.go:317] "Adding debug handlers to kubelet server" Oct 29 23:32:08.680398 kubelet[2296]: I1029 23:32:08.678871 2296 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:32:08.681286 kubelet[2296]: I1029 23:32:08.681108 2296 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:32:08.681348 kubelet[2296]: I1029 23:32:08.679007 2296 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:32:08.681450 kubelet[2296]: I1029 23:32:08.681406 2296 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:32:08.681607 kubelet[2296]: I1029 23:32:08.681589 2296 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 23:32:08.681728 kubelet[2296]: E1029 23:32:08.681711 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:32:08.682519 kubelet[2296]: I1029 23:32:08.682495 2296 factory.go:223] Registration of the systemd container factory successfully Oct 29 23:32:08.682690 kubelet[2296]: I1029 23:32:08.682670 2296 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:32:08.682868 kubelet[2296]: E1029 23:32:08.682829 2296 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 23:32:08.682868 kubelet[2296]: E1029 23:32:08.682700 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Oct 29 23:32:08.683263 kubelet[2296]: I1029 23:32:08.682498 2296 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 23:32:08.683354 kubelet[2296]: I1029 23:32:08.682552 2296 reconciler.go:26] "Reconciler: start to sync state" Oct 29 23:32:08.684183 kubelet[2296]: E1029 23:32:08.684132 2296 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 23:32:08.684271 kubelet[2296]: I1029 23:32:08.684156 2296 factory.go:223] Registration of the containerd container factory successfully Oct 29 23:32:08.684638 kubelet[2296]: E1029 23:32:08.682760 2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731a3d08b626b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 23:32:08.673478327 +0000 UTC m=+0.804133231,LastTimestamp:2025-10-29 23:32:08.673478327 +0000 UTC m=+0.804133231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 23:32:08.694279 kubelet[2296]: I1029 23:32:08.693963 2296 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:32:08.694279 kubelet[2296]: I1029 23:32:08.693981 2296 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:32:08.694279 kubelet[2296]: I1029 23:32:08.693999 2296 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:32:08.698450 kubelet[2296]: I1029 23:32:08.698395 2296 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 23:32:08.699407 kubelet[2296]: I1029 23:32:08.699374 2296 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 29 23:32:08.699407 kubelet[2296]: I1029 23:32:08.699400 2296 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 23:32:08.699479 kubelet[2296]: I1029 23:32:08.699422 2296 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 23:32:08.699479 kubelet[2296]: I1029 23:32:08.699431 2296 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 23:32:08.699479 kubelet[2296]: E1029 23:32:08.699471 2296 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:32:08.701160 kubelet[2296]: E1029 23:32:08.701119 2296 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 23:32:08.781956 kubelet[2296]: E1029 23:32:08.781898 2296 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 23:32:08.782210 kubelet[2296]: I1029 23:32:08.782191 2296 policy_none.go:49] "None policy: Start" Oct 29 23:32:08.782315 kubelet[2296]: I1029 23:32:08.782302 2296 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 23:32:08.782371 kubelet[2296]: I1029 23:32:08.782361 2296 state_mem.go:35] "Initializing new in-memory state store" Oct 29 23:32:08.787086 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Oct 29 23:32:08.798738 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 23:32:08.799596 kubelet[2296]: E1029 23:32:08.799526 2296 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 23:32:08.801445 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 23:32:08.819263 kubelet[2296]: E1029 23:32:08.819130 2296 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 23:32:08.819408 kubelet[2296]: I1029 23:32:08.819395 2296 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:32:08.819436 kubelet[2296]: I1029 23:32:08.819410 2296 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:32:08.820099 kubelet[2296]: I1029 23:32:08.820034 2296 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:32:08.821741 kubelet[2296]: E1029 23:32:08.821709 2296 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 23:32:08.821810 kubelet[2296]: E1029 23:32:08.821763 2296 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 23:32:08.884305 kubelet[2296]: E1029 23:32:08.883413 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Oct 29 23:32:08.922265 kubelet[2296]: I1029 23:32:08.922102 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:32:08.922746 kubelet[2296]: E1029 23:32:08.922549 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Oct 29 23:32:09.009987 systemd[1]: Created slice kubepods-burstable-pod1988f5634626e1d263bc833ae42b1329.slice - libcontainer container kubepods-burstable-pod1988f5634626e1d263bc833ae42b1329.slice. Oct 29 23:32:09.028060 kubelet[2296]: E1029 23:32:09.028021 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:09.032098 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 29 23:32:09.047640 kubelet[2296]: E1029 23:32:09.047448 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:09.050416 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
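Editor's note: every reflector, lease, and node-registration call above fails with "dial tcp 10.0.0.74:6443: connect: connection refused". That is the normal chicken-and-egg state on a bootstrapping control-plane node: the kubelet cannot reach kube-apiserver until the static pod it is about to create starts serving. A small probe sketch for that endpoint; the address is taken from the log, and certificate verification is disabled because this is only a "is anything listening yet" check.

#!/usr/bin/env python3
"""Sketch: poll the kube-apiserver /healthz endpoint the kubelet above cannot
reach. Address comes from the log; TLS verification is skipped since this only
checks whether a listener is up."""
import ssl
import time
import urllib.error
import urllib.request

URL = "https://10.0.0.74:6443/healthz"
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for attempt in range(10):
    try:
        with urllib.request.urlopen(URL, context=ctx, timeout=2) as resp:
            print(f"attempt {attempt}: HTTP {resp.status} {resp.read().decode()!r}")
            break
    except (urllib.error.URLError, OSError) as exc:
        # Mirrors the 'connection refused' errors in the kubelet log until the
        # kube-apiserver static pod comes up.
        print(f"attempt {attempt}: {exc}")
        time.sleep(3)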
Oct 29 23:32:09.052058 kubelet[2296]: E1029 23:32:09.052018 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:09.085381 kubelet[2296]: I1029 23:32:09.085343 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:09.085381 kubelet[2296]: I1029 23:32:09.085378 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:09.085468 kubelet[2296]: I1029 23:32:09.085405 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:09.085468 kubelet[2296]: I1029 23:32:09.085425 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:09.085468 kubelet[2296]: I1029 23:32:09.085442 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:09.085468 kubelet[2296]: I1029 23:32:09.085456 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:09.085562 kubelet[2296]: I1029 23:32:09.085491 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:09.085562 kubelet[2296]: I1029 23:32:09.085522 2296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:09.085601 kubelet[2296]: I1029 23:32:09.085573 2296 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 23:32:09.124667 kubelet[2296]: I1029 23:32:09.124624 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:32:09.125019 kubelet[2296]: E1029 23:32:09.124973 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Oct 29 23:32:09.284806 kubelet[2296]: E1029 23:32:09.284687 2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Oct 29 23:32:09.330201 containerd[1503]: time="2025-10-29T23:32:09.329914137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1988f5634626e1d263bc833ae42b1329,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:09.348651 containerd[1503]: time="2025-10-29T23:32:09.348604617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:09.353660 containerd[1503]: time="2025-10-29T23:32:09.353565719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:09.356111 containerd[1503]: time="2025-10-29T23:32:09.356050055Z" level=info msg="connecting to shim 5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec" address="unix:///run/containerd/s/2ec7970afbc40fa33f0688fec9e81c42e150550cc4a58c8164feec691feffbbd" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:09.389992 containerd[1503]: time="2025-10-29T23:32:09.389832845Z" level=info msg="connecting to shim a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908" address="unix:///run/containerd/s/24a79efdb1f46000fe9b8b2cc4542336dcb022a29cdd26d8ffc7411a505a87ea" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:09.393412 systemd[1]: Started cri-containerd-5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec.scope - libcontainer container 5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec. Oct 29 23:32:09.398880 containerd[1503]: time="2025-10-29T23:32:09.398730937Z" level=info msg="connecting to shim d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883" address="unix:///run/containerd/s/3649b7e35893e65675899d7135483b24cdcfb6b7d2cecb55b0b68573a2e3a6c7" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:09.423421 systemd[1]: Started cri-containerd-a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908.scope - libcontainer container a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908. Oct 29 23:32:09.428130 systemd[1]: Started cri-containerd-d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883.scope - libcontainer container d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883. 
Oct 29 23:32:09.440252 containerd[1503]: time="2025-10-29T23:32:09.440192683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1988f5634626e1d263bc833ae42b1329,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec\"" Oct 29 23:32:09.448527 containerd[1503]: time="2025-10-29T23:32:09.448481066Z" level=info msg="CreateContainer within sandbox \"5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 23:32:09.457776 containerd[1503]: time="2025-10-29T23:32:09.457732158Z" level=info msg="Container 1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:09.469310 containerd[1503]: time="2025-10-29T23:32:09.469269357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908\"" Oct 29 23:32:09.473622 containerd[1503]: time="2025-10-29T23:32:09.473577892Z" level=info msg="CreateContainer within sandbox \"5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5\"" Oct 29 23:32:09.474441 containerd[1503]: time="2025-10-29T23:32:09.474201780Z" level=info msg="StartContainer for \"1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5\"" Oct 29 23:32:09.475731 containerd[1503]: time="2025-10-29T23:32:09.475693367Z" level=info msg="CreateContainer within sandbox \"a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 23:32:09.477088 containerd[1503]: time="2025-10-29T23:32:09.477053936Z" level=info msg="connecting to shim 1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5" address="unix:///run/containerd/s/2ec7970afbc40fa33f0688fec9e81c42e150550cc4a58c8164feec691feffbbd" protocol=ttrpc version=3 Oct 29 23:32:09.481503 containerd[1503]: time="2025-10-29T23:32:09.481464530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883\"" Oct 29 23:32:09.487980 containerd[1503]: time="2025-10-29T23:32:09.487198482Z" level=info msg="Container 65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:09.487980 containerd[1503]: time="2025-10-29T23:32:09.487409008Z" level=info msg="CreateContainer within sandbox \"d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 23:32:09.502632 systemd[1]: Started cri-containerd-1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5.scope - libcontainer container 1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5. 
Oct 29 23:32:09.503063 containerd[1503]: time="2025-10-29T23:32:09.503016418Z" level=info msg="CreateContainer within sandbox \"a115b20468593a7f96cf066f721ed507ee6be22d30329dfa0239ffd797cb1908\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7\"" Oct 29 23:32:09.504043 containerd[1503]: time="2025-10-29T23:32:09.504013373Z" level=info msg="StartContainer for \"65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7\"" Oct 29 23:32:09.504746 containerd[1503]: time="2025-10-29T23:32:09.504657328Z" level=info msg="Container a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:09.505878 containerd[1503]: time="2025-10-29T23:32:09.505844622Z" level=info msg="connecting to shim 65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7" address="unix:///run/containerd/s/24a79efdb1f46000fe9b8b2cc4542336dcb022a29cdd26d8ffc7411a505a87ea" protocol=ttrpc version=3 Oct 29 23:32:09.513424 containerd[1503]: time="2025-10-29T23:32:09.513384909Z" level=info msg="CreateContainer within sandbox \"d26b849cb6b45618e8ed6ea12a4d308d02244c49e0b81d24124545846b5c5883\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299\"" Oct 29 23:32:09.515375 containerd[1503]: time="2025-10-29T23:32:09.514063751Z" level=info msg="StartContainer for \"a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299\"" Oct 29 23:32:09.515375 containerd[1503]: time="2025-10-29T23:32:09.515042802Z" level=info msg="connecting to shim a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299" address="unix:///run/containerd/s/3649b7e35893e65675899d7135483b24cdcfb6b7d2cecb55b0b68573a2e3a6c7" protocol=ttrpc version=3 Oct 29 23:32:09.527318 kubelet[2296]: I1029 23:32:09.527287 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:32:09.527729 kubelet[2296]: E1029 23:32:09.527703 2296 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Oct 29 23:32:09.530386 systemd[1]: Started cri-containerd-65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7.scope - libcontainer container 65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7. Oct 29 23:32:09.538485 systemd[1]: Started cri-containerd-a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299.scope - libcontainer container a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299. 
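Across the containerd entries here, each static pod goes through RunPodSandbox (which returns a sandbox id), CreateContainer inside that sandbox (which returns a container id), and then the StartContainer results that appear a few lines below. A rough Python sketch (not from the log) reconstructing that chain from two of the kube-apiserver messages above, abridged to their msg= payloads:

```python
import re

# Two containerd messages copied from the kube-apiserver entries above;
# the \" escapes are kept exactly as they appear in the log.
log = r'''
RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1988f5634626e1d263bc833ae42b1329,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec\"
CreateContainer within sandbox \"5e440f7abe3210d9234d8cf23cafbc254a83eee5088e6a05199de5fa095182ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5\"
'''

# pod name -> sandbox id
sandboxes = dict(re.findall(
    r'PodSandboxMetadata{Name:([\w.-]+),.*?returns sandbox id \\"([0-9a-f]+)\\"', log))
# sandbox id -> container id
containers = dict(re.findall(
    r'within sandbox \\"([0-9a-f]+)\\".*?returns container id \\"([0-9a-f]+)\\"', log))

for pod, sid in sandboxes.items():
    print(f"{pod}: sandbox {sid[:12]} -> container {containers.get(sid, '?')[:12]}")
# kube-apiserver-localhost: sandbox 5e440f7abe32 -> container 1a76e8b1db5e
```

The systemd "Started cri-containerd-<id>.scope" lines interleaved above are the cgroup scopes for those same ids, since the kubelet is running with the systemd cgroup driver.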
Oct 29 23:32:09.553073 containerd[1503]: time="2025-10-29T23:32:09.552981439Z" level=info msg="StartContainer for \"1a76e8b1db5ea70ff478020fd5f27ec6f741962c555f12681ed21102283291a5\" returns successfully" Oct 29 23:32:09.587979 containerd[1503]: time="2025-10-29T23:32:09.587938104Z" level=info msg="StartContainer for \"a4770450fce2017ba38eceb1613003c4b773e90ec008d62221743d8888096299\" returns successfully" Oct 29 23:32:09.592955 containerd[1503]: time="2025-10-29T23:32:09.592599880Z" level=info msg="StartContainer for \"65e23ca0ebcaa31fb609d4672e61bbff380388e1b021054f34eb501657162fe7\" returns successfully" Oct 29 23:32:09.708295 kubelet[2296]: E1029 23:32:09.708259 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:09.712282 kubelet[2296]: E1029 23:32:09.710903 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:09.714233 kubelet[2296]: E1029 23:32:09.714209 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:10.330274 kubelet[2296]: I1029 23:32:10.330053 2296 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:32:10.716380 kubelet[2296]: E1029 23:32:10.716286 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:10.717550 kubelet[2296]: E1029 23:32:10.717361 2296 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 23:32:10.849506 kubelet[2296]: E1029 23:32:10.849473 2296 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 29 23:32:10.914466 kubelet[2296]: I1029 23:32:10.914427 2296 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 23:32:10.915259 kubelet[2296]: E1029 23:32:10.915105 2296 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18731a3d08b626b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 23:32:08.673478327 +0000 UTC m=+0.804133231,LastTimestamp:2025-10-29 23:32:08.673478327 +0000 UTC m=+0.804133231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 23:32:10.915927 kubelet[2296]: E1029 23:32:10.915900 2296 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 29 23:32:10.983730 kubelet[2296]: I1029 23:32:10.983608 2296 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:10.989563 kubelet[2296]: E1029 23:32:10.989528 2296 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:10.989563 kubelet[2296]: I1029 23:32:10.989558 2296 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:10.992025 kubelet[2296]: E1029 23:32:10.991893 2296 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:10.992025 kubelet[2296]: I1029 23:32:10.991917 2296 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:32:10.994019 kubelet[2296]: E1029 23:32:10.993996 2296 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 29 23:32:11.668549 kubelet[2296]: I1029 23:32:11.668512 2296 apiserver.go:52] "Watching apiserver" Oct 29 23:32:11.684176 kubelet[2296]: I1029 23:32:11.684121 2296 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 23:32:11.716833 kubelet[2296]: I1029 23:32:11.716798 2296 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:11.719064 kubelet[2296]: E1029 23:32:11.719034 2296 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:12.805277 systemd[1]: Reload requested from client PID 2576 ('systemctl') (unit session-7.scope)... Oct 29 23:32:12.805292 systemd[1]: Reloading... Oct 29 23:32:12.883322 zram_generator::config[2622]: No configuration found. Oct 29 23:32:13.047532 systemd[1]: Reloading finished in 241 ms. Oct 29 23:32:13.077230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:32:13.094558 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 23:32:13.095360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:13.095428 systemd[1]: kubelet.service: Consumed 1.185s CPU time, 128.7M memory peak. Oct 29 23:32:13.097177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:32:13.254933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:32:13.276787 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:32:13.316812 kubelet[2661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:32:13.316812 kubelet[2661]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:32:13.316812 kubelet[2661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 23:32:13.317137 kubelet[2661]: I1029 23:32:13.316858 2661 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:32:13.325162 kubelet[2661]: I1029 23:32:13.325126 2661 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 29 23:32:13.325162 kubelet[2661]: I1029 23:32:13.325157 2661 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:32:13.325660 kubelet[2661]: I1029 23:32:13.325625 2661 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 23:32:13.326954 kubelet[2661]: I1029 23:32:13.326927 2661 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 29 23:32:13.329227 kubelet[2661]: I1029 23:32:13.329147 2661 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:32:13.338549 kubelet[2661]: I1029 23:32:13.337815 2661 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:32:13.340572 kubelet[2661]: I1029 23:32:13.340352 2661 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 29 23:32:13.340572 kubelet[2661]: I1029 23:32:13.340531 2661 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:32:13.340852 kubelet[2661]: I1029 23:32:13.340552 2661 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:32:13.340852 kubelet[2661]: I1029 23:32:13.340729 2661 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:32:13.340852 kubelet[2661]: I1029 23:32:13.340737 2661 container_manager_linux.go:303] "Creating device plugin manager" Oct 29 23:32:13.340852 kubelet[2661]: I1029 23:32:13.340779 2661 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:32:13.341300 kubelet[2661]: I1029 
23:32:13.340917 2661 kubelet.go:480] "Attempting to sync node with API server" Oct 29 23:32:13.341300 kubelet[2661]: I1029 23:32:13.340931 2661 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:32:13.341300 kubelet[2661]: I1029 23:32:13.340954 2661 kubelet.go:386] "Adding apiserver pod source" Oct 29 23:32:13.341300 kubelet[2661]: I1029 23:32:13.340966 2661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:32:13.342019 kubelet[2661]: I1029 23:32:13.341980 2661 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:32:13.342868 kubelet[2661]: I1029 23:32:13.342814 2661 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 23:32:13.347677 kubelet[2661]: I1029 23:32:13.347587 2661 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 23:32:13.347677 kubelet[2661]: I1029 23:32:13.347637 2661 server.go:1289] "Started kubelet" Oct 29 23:32:13.357412 kubelet[2661]: I1029 23:32:13.356816 2661 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 23:32:13.357744 kubelet[2661]: I1029 23:32:13.357712 2661 server.go:317] "Adding debug handlers to kubelet server" Oct 29 23:32:13.360401 kubelet[2661]: I1029 23:32:13.360349 2661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:32:13.360789 kubelet[2661]: I1029 23:32:13.360768 2661 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:32:13.362235 kubelet[2661]: I1029 23:32:13.362210 2661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:32:13.363114 kubelet[2661]: I1029 23:32:13.362671 2661 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:32:13.364170 kubelet[2661]: I1029 23:32:13.364146 2661 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 23:32:13.364391 kubelet[2661]: E1029 23:32:13.364365 2661 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 23:32:13.364449 kubelet[2661]: I1029 23:32:13.364400 2661 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 23:32:13.364550 kubelet[2661]: I1029 23:32:13.364530 2661 reconciler.go:26] "Reconciler: start to sync state" Oct 29 23:32:13.365846 kubelet[2661]: I1029 23:32:13.365799 2661 factory.go:223] Registration of the systemd container factory successfully Oct 29 23:32:13.365935 kubelet[2661]: I1029 23:32:13.365908 2661 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:32:13.366957 kubelet[2661]: I1029 23:32:13.366932 2661 factory.go:223] Registration of the containerd container factory successfully Oct 29 23:32:13.374370 kubelet[2661]: I1029 23:32:13.374329 2661 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 29 23:32:13.375774 kubelet[2661]: I1029 23:32:13.375737 2661 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 29 23:32:13.375774 kubelet[2661]: I1029 23:32:13.375766 2661 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 29 23:32:13.375843 kubelet[2661]: I1029 23:32:13.375789 2661 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 23:32:13.375843 kubelet[2661]: I1029 23:32:13.375795 2661 kubelet.go:2436] "Starting kubelet main sync loop" Oct 29 23:32:13.375843 kubelet[2661]: E1029 23:32:13.375834 2661 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:32:13.406947 kubelet[2661]: I1029 23:32:13.406917 2661 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:32:13.406947 kubelet[2661]: I1029 23:32:13.406938 2661 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:32:13.406947 kubelet[2661]: I1029 23:32:13.406961 2661 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:32:13.407129 kubelet[2661]: I1029 23:32:13.407097 2661 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 23:32:13.407129 kubelet[2661]: I1029 23:32:13.407108 2661 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 23:32:13.407129 kubelet[2661]: I1029 23:32:13.407126 2661 policy_none.go:49] "None policy: Start" Oct 29 23:32:13.407201 kubelet[2661]: I1029 23:32:13.407135 2661 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 23:32:13.407201 kubelet[2661]: I1029 23:32:13.407144 2661 state_mem.go:35] "Initializing new in-memory state store" Oct 29 23:32:13.407288 kubelet[2661]: I1029 23:32:13.407268 2661 state_mem.go:75] "Updated machine memory state" Oct 29 23:32:13.411481 kubelet[2661]: E1029 23:32:13.411398 2661 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 23:32:13.411617 kubelet[2661]: I1029 23:32:13.411577 2661 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:32:13.411617 kubelet[2661]: I1029 23:32:13.411590 2661 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:32:13.411833 kubelet[2661]: I1029 23:32:13.411757 2661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:32:13.413606 kubelet[2661]: E1029 23:32:13.413577 2661 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 23:32:13.477299 kubelet[2661]: I1029 23:32:13.477150 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 23:32:13.477299 kubelet[2661]: I1029 23:32:13.477226 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:13.477448 kubelet[2661]: I1029 23:32:13.477319 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:13.516293 kubelet[2661]: I1029 23:32:13.516266 2661 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 23:32:13.523462 kubelet[2661]: I1029 23:32:13.523436 2661 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 23:32:13.523549 kubelet[2661]: I1029 23:32:13.523520 2661 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 23:32:13.566393 kubelet[2661]: I1029 23:32:13.566346 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 29 23:32:13.566523 kubelet[2661]: I1029 23:32:13.566427 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:13.566523 kubelet[2661]: I1029 23:32:13.566450 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:13.566523 kubelet[2661]: I1029 23:32:13.566463 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:13.566523 kubelet[2661]: I1029 23:32:13.566494 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:13.566633 kubelet[2661]: I1029 23:32:13.566538 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:13.566633 kubelet[2661]: I1029 23:32:13.566590 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1988f5634626e1d263bc833ae42b1329-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1988f5634626e1d263bc833ae42b1329\") " pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:13.566633 kubelet[2661]: I1029 23:32:13.566620 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:13.566710 kubelet[2661]: I1029 23:32:13.566638 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 23:32:14.341641 kubelet[2661]: I1029 23:32:14.341347 2661 apiserver.go:52] "Watching apiserver" Oct 29 23:32:14.365025 kubelet[2661]: I1029 23:32:14.364965 2661 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 23:32:14.393318 kubelet[2661]: I1029 23:32:14.392944 2661 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:14.399235 kubelet[2661]: E1029 23:32:14.399197 2661 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 23:32:14.419594 kubelet[2661]: I1029 23:32:14.419524 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.419508271 podStartE2EDuration="1.419508271s" podCreationTimestamp="2025-10-29 23:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:14.410588489 +0000 UTC m=+1.130052889" watchObservedRunningTime="2025-10-29 23:32:14.419508271 +0000 UTC m=+1.138972671" Oct 29 23:32:14.430264 kubelet[2661]: I1029 23:32:14.430113 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.430095708 podStartE2EDuration="1.430095708s" podCreationTimestamp="2025-10-29 23:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:14.419662594 +0000 UTC m=+1.139127034" watchObservedRunningTime="2025-10-29 23:32:14.430095708 +0000 UTC m=+1.149560148" Oct 29 23:32:14.441122 kubelet[2661]: I1029 23:32:14.440880 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.44086277 podStartE2EDuration="1.44086277s" podCreationTimestamp="2025-10-29 23:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:14.431277735 +0000 UTC m=+1.150742175" watchObservedRunningTime="2025-10-29 23:32:14.44086277 +0000 UTC m=+1.160327210" Oct 29 23:32:19.883506 kubelet[2661]: I1029 23:32:19.883366 2661 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 
23:32:19.883913 containerd[1503]: time="2025-10-29T23:32:19.883678354Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 23:32:19.884750 kubelet[2661]: I1029 23:32:19.884260 2661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 23:32:20.542930 systemd[1]: Created slice kubepods-besteffort-pod8894824d_d356_456a_b258_bd568db5c932.slice - libcontainer container kubepods-besteffort-pod8894824d_d356_456a_b258_bd568db5c932.slice. Oct 29 23:32:20.605054 kubelet[2661]: I1029 23:32:20.605003 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8894824d-d356-456a-b258-bd568db5c932-kube-proxy\") pod \"kube-proxy-vhkgx\" (UID: \"8894824d-d356-456a-b258-bd568db5c932\") " pod="kube-system/kube-proxy-vhkgx" Oct 29 23:32:20.605054 kubelet[2661]: I1029 23:32:20.605051 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8894824d-d356-456a-b258-bd568db5c932-xtables-lock\") pod \"kube-proxy-vhkgx\" (UID: \"8894824d-d356-456a-b258-bd568db5c932\") " pod="kube-system/kube-proxy-vhkgx" Oct 29 23:32:20.605197 kubelet[2661]: I1029 23:32:20.605068 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68q7\" (UniqueName: \"kubernetes.io/projected/8894824d-d356-456a-b258-bd568db5c932-kube-api-access-x68q7\") pod \"kube-proxy-vhkgx\" (UID: \"8894824d-d356-456a-b258-bd568db5c932\") " pod="kube-system/kube-proxy-vhkgx" Oct 29 23:32:20.605197 kubelet[2661]: I1029 23:32:20.605097 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8894824d-d356-456a-b258-bd568db5c932-lib-modules\") pod \"kube-proxy-vhkgx\" (UID: \"8894824d-d356-456a-b258-bd568db5c932\") " pod="kube-system/kube-proxy-vhkgx" Oct 29 23:32:20.719762 kubelet[2661]: E1029 23:32:20.719699 2661 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 29 23:32:20.719762 kubelet[2661]: E1029 23:32:20.719740 2661 projected.go:194] Error preparing data for projected volume kube-api-access-x68q7 for pod kube-system/kube-proxy-vhkgx: configmap "kube-root-ca.crt" not found Oct 29 23:32:20.719916 kubelet[2661]: E1029 23:32:20.719814 2661 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8894824d-d356-456a-b258-bd568db5c932-kube-api-access-x68q7 podName:8894824d-d356-456a-b258-bd568db5c932 nodeName:}" failed. No retries permitted until 2025-10-29 23:32:21.219787608 +0000 UTC m=+7.939252048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x68q7" (UniqueName: "kubernetes.io/projected/8894824d-d356-456a-b258-bd568db5c932-kube-api-access-x68q7") pod "kube-proxy-vhkgx" (UID: "8894824d-d356-456a-b258-bd568db5c932") : configmap "kube-root-ca.crt" not found Oct 29 23:32:21.108944 kubelet[2661]: I1029 23:32:21.108895 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24cl\" (UniqueName: \"kubernetes.io/projected/6d9e6a66-d64d-48ca-9fbf-68a6221730cb-kube-api-access-l24cl\") pod \"tigera-operator-7dcd859c48-rml8v\" (UID: \"6d9e6a66-d64d-48ca-9fbf-68a6221730cb\") " pod="tigera-operator/tigera-operator-7dcd859c48-rml8v" Oct 29 23:32:21.108944 kubelet[2661]: I1029 23:32:21.108935 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d9e6a66-d64d-48ca-9fbf-68a6221730cb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rml8v\" (UID: \"6d9e6a66-d64d-48ca-9fbf-68a6221730cb\") " pod="tigera-operator/tigera-operator-7dcd859c48-rml8v" Oct 29 23:32:21.112150 systemd[1]: Created slice kubepods-besteffort-pod6d9e6a66_d64d_48ca_9fbf_68a6221730cb.slice - libcontainer container kubepods-besteffort-pod6d9e6a66_d64d_48ca_9fbf_68a6221730cb.slice. Oct 29 23:32:21.415965 containerd[1503]: time="2025-10-29T23:32:21.415841341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rml8v,Uid:6d9e6a66-d64d-48ca-9fbf-68a6221730cb,Namespace:tigera-operator,Attempt:0,}" Oct 29 23:32:21.431430 containerd[1503]: time="2025-10-29T23:32:21.431383588Z" level=info msg="connecting to shim a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326" address="unix:///run/containerd/s/995b2d14823f8067ff1fb8e71b320263d68b81e25be5b1b6012be81b81952800" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:21.462027 containerd[1503]: time="2025-10-29T23:32:21.461968671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhkgx,Uid:8894824d-d356-456a-b258-bd568db5c932,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:21.463423 systemd[1]: Started cri-containerd-a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326.scope - libcontainer container a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326. Oct 29 23:32:21.479742 containerd[1503]: time="2025-10-29T23:32:21.479675156Z" level=info msg="connecting to shim 935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110" address="unix:///run/containerd/s/6242090142b5221a15c6e064fe3a1cc026ced5127b76e80cf830d064472b8036" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:21.505448 systemd[1]: Started cri-containerd-935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110.scope - libcontainer container 935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110. 
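The MountVolume.SetUp failure above is retried: the operation executor refuses retries for 500ms after the failure (durationBeforeRetry 500ms) and backs off further on repeated failures. A small Python sketch of that retry pattern; only the initial 500ms comes from the log, the doubling factor and the cap are assumptions:

```python
from datetime import datetime, timedelta

# Retry-delay sketch behind "No retries permitted until ... (durationBeforeRetry 500ms)".
INITIAL_DELAY = timedelta(milliseconds=500)   # value shown in the log
BACKOFF_FACTOR = 2.0                          # assumed
MAX_DELAY = timedelta(minutes=2, seconds=2)   # assumed cap

def next_retry(last_failure: datetime, attempt: int) -> datetime:
    delay = INITIAL_DELAY * (BACKOFF_FACTOR ** attempt)
    return last_failure + min(delay, MAX_DELAY)

# The failure above happened ~500ms before the permitted retry time.
failure = datetime.fromisoformat("2025-10-29 23:32:20.719787")
for attempt in range(4):
    print(attempt, next_retry(failure, attempt).time())
# attempt 0 -> 23:32:21.219787, matching the retry time printed in the log
```

Once the kube-root-ca.crt ConfigMap exists, the projected kube-api-access volume mounts on one of these retries and the kube-proxy pod can start, which is what the following lines show.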
Oct 29 23:32:21.508493 containerd[1503]: time="2025-10-29T23:32:21.508440509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rml8v,Uid:6d9e6a66-d64d-48ca-9fbf-68a6221730cb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326\"" Oct 29 23:32:21.512408 containerd[1503]: time="2025-10-29T23:32:21.512368038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 23:32:21.533755 containerd[1503]: time="2025-10-29T23:32:21.533715382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhkgx,Uid:8894824d-d356-456a-b258-bd568db5c932,Namespace:kube-system,Attempt:0,} returns sandbox id \"935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110\"" Oct 29 23:32:21.538364 containerd[1503]: time="2025-10-29T23:32:21.538298027Z" level=info msg="CreateContainer within sandbox \"935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 23:32:21.546293 containerd[1503]: time="2025-10-29T23:32:21.546203920Z" level=info msg="Container 7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:21.552489 containerd[1503]: time="2025-10-29T23:32:21.552449230Z" level=info msg="CreateContainer within sandbox \"935aab396a01ad57b52c841dc3cfe65a2323ea5ff48c87d1b409849e14bc3110\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b\"" Oct 29 23:32:21.553112 containerd[1503]: time="2025-10-29T23:32:21.552966934Z" level=info msg="StartContainer for \"7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b\"" Oct 29 23:32:21.554372 containerd[1503]: time="2025-10-29T23:32:21.554275083Z" level=info msg="connecting to shim 7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b" address="unix:///run/containerd/s/6242090142b5221a15c6e064fe3a1cc026ced5127b76e80cf830d064472b8036" protocol=ttrpc version=3 Oct 29 23:32:21.581446 systemd[1]: Started cri-containerd-7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b.scope - libcontainer container 7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b. Oct 29 23:32:21.613201 containerd[1503]: time="2025-10-29T23:32:21.613166093Z" level=info msg="StartContainer for \"7137b77eb725609d815ae95e665df7fcd14c7667651f7fdbdf2092f74cd3314b\" returns successfully" Oct 29 23:32:22.990885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104895852.mount: Deactivated successfully. 
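The PullImage of quay.io/tigera/operator:v1.38.7 that starts here completes a few lines further down, reporting a size of 22147999 bytes pulled in 2.799549796s. Quick arithmetic on those reported figures (a sketch, not output from the log):

```python
# Values copied from the "Pulled image ..." line below.
size_bytes = 22_147_999
duration_s = 2.799549796

rate = size_bytes / duration_s
print(f"{rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
# -> roughly 7.9 MB/s (7.5 MiB/s) average pull rate for the operator image
```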
Oct 29 23:32:23.034353 kubelet[2661]: I1029 23:32:23.034217 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhkgx" podStartSLOduration=3.034201177 podStartE2EDuration="3.034201177s" podCreationTimestamp="2025-10-29 23:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:22.419428716 +0000 UTC m=+9.138893156" watchObservedRunningTime="2025-10-29 23:32:23.034201177 +0000 UTC m=+9.753665617" Oct 29 23:32:24.307281 containerd[1503]: time="2025-10-29T23:32:24.307127661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:24.308188 containerd[1503]: time="2025-10-29T23:32:24.308000115Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 29 23:32:24.308984 containerd[1503]: time="2025-10-29T23:32:24.308948291Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:24.311426 containerd[1503]: time="2025-10-29T23:32:24.311393435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:24.312280 containerd[1503]: time="2025-10-29T23:32:24.312104477Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.799549796s" Oct 29 23:32:24.312280 containerd[1503]: time="2025-10-29T23:32:24.312150983Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 29 23:32:24.319711 containerd[1503]: time="2025-10-29T23:32:24.319669877Z" level=info msg="CreateContainer within sandbox \"a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 23:32:24.341270 containerd[1503]: time="2025-10-29T23:32:24.341001305Z" level=info msg="Container 8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:24.346909 containerd[1503]: time="2025-10-29T23:32:24.346851694Z" level=info msg="CreateContainer within sandbox \"a2f37537ec2f1e5d83be21534096c7538fb7e9fbf487df5bb5a249cac1df3326\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d\"" Oct 29 23:32:24.347887 containerd[1503]: time="2025-10-29T23:32:24.347686887Z" level=info msg="StartContainer for \"8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d\"" Oct 29 23:32:24.348710 containerd[1503]: time="2025-10-29T23:32:24.348650752Z" level=info msg="connecting to shim 8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d" address="unix:///run/containerd/s/995b2d14823f8067ff1fb8e71b320263d68b81e25be5b1b6012be81b81952800" protocol=ttrpc version=3 Oct 29 23:32:24.370381 systemd[1]: Started 
cri-containerd-8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d.scope - libcontainer container 8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d. Oct 29 23:32:24.402280 containerd[1503]: time="2025-10-29T23:32:24.402228022Z" level=info msg="StartContainer for \"8db1a6ac232460d1c0cf7721afc8de0955c6d8f1efb03626da7710885f57177d\" returns successfully" Oct 29 23:32:24.425604 kubelet[2661]: I1029 23:32:24.425290 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rml8v" podStartSLOduration=0.621245091 podStartE2EDuration="3.42527298s" podCreationTimestamp="2025-10-29 23:32:21 +0000 UTC" firstStartedPulling="2025-10-29 23:32:21.510932484 +0000 UTC m=+8.230396924" lastFinishedPulling="2025-10-29 23:32:24.314960413 +0000 UTC m=+11.034424813" observedRunningTime="2025-10-29 23:32:24.424975691 +0000 UTC m=+11.144440131" watchObservedRunningTime="2025-10-29 23:32:24.42527298 +0000 UTC m=+11.144737420" Oct 29 23:32:27.501951 update_engine[1487]: I20251029 23:32:27.501415 1487 update_attempter.cc:509] Updating boot flags... Oct 29 23:32:29.614340 sudo[1717]: pam_unix(sudo:session): session closed for user root Oct 29 23:32:29.616268 sshd[1716]: Connection closed by 10.0.0.1 port 34888 Oct 29 23:32:29.616816 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:29.620407 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:34888.service: Deactivated successfully. Oct 29 23:32:29.622647 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 23:32:29.622907 systemd[1]: session-7.scope: Consumed 7.685s CPU time, 221.5M memory peak. Oct 29 23:32:29.624038 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Oct 29 23:32:29.626211 systemd-logind[1486]: Removed session 7. Oct 29 23:32:37.563673 systemd[1]: Created slice kubepods-besteffort-podb5364a24_65c7_475d_ad0f_62cb4478fa0d.slice - libcontainer container kubepods-besteffort-podb5364a24_65c7_475d_ad0f_62cb4478fa0d.slice. Oct 29 23:32:37.624279 kubelet[2661]: I1029 23:32:37.624215 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5364a24-65c7-475d-ad0f-62cb4478fa0d-typha-certs\") pod \"calico-typha-5dbd44d7df-d8psv\" (UID: \"b5364a24-65c7-475d-ad0f-62cb4478fa0d\") " pod="calico-system/calico-typha-5dbd44d7df-d8psv" Oct 29 23:32:37.624279 kubelet[2661]: I1029 23:32:37.624278 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqzt\" (UniqueName: \"kubernetes.io/projected/b5364a24-65c7-475d-ad0f-62cb4478fa0d-kube-api-access-pcqzt\") pod \"calico-typha-5dbd44d7df-d8psv\" (UID: \"b5364a24-65c7-475d-ad0f-62cb4478fa0d\") " pod="calico-system/calico-typha-5dbd44d7df-d8psv" Oct 29 23:32:37.624665 kubelet[2661]: I1029 23:32:37.624299 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5364a24-65c7-475d-ad0f-62cb4478fa0d-tigera-ca-bundle\") pod \"calico-typha-5dbd44d7df-d8psv\" (UID: \"b5364a24-65c7-475d-ad0f-62cb4478fa0d\") " pod="calico-system/calico-typha-5dbd44d7df-d8psv" Oct 29 23:32:37.723096 systemd[1]: Created slice kubepods-besteffort-podb2051169_1a85_4899_88bb_08f2e9cb9a43.slice - libcontainer container kubepods-besteffort-podb2051169_1a85_4899_88bb_08f2e9cb9a43.slice. 
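The pod_startup_latency_tracker entry for tigera-operator above reports podStartE2EDuration=3.42527298s but podStartSLOduration=0.621245091s; the difference appears to be the image-pull window between firstStartedPulling and lastFinishedPulling. A back-of-envelope check in Python, with the timestamps copied from the log and truncated to microseconds:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2025-10-29 23:32:21.510932", fmt)  # firstStartedPulling
last_pull  = datetime.strptime("2025-10-29 23:32:24.314960", fmt)  # lastFinishedPulling
e2e = 3.42527298  # podStartE2EDuration from the log, in seconds

pull = (last_pull - first_pull).total_seconds()
print(f"pull {pull:.3f}s, E2E - pull = {e2e - pull:.3f}s")
# -> pull ~2.804s, E2E minus pull ~0.621s, matching podStartSLOduration
```

For the kube-proxy and control-plane pods earlier, no image was pulled (firstStartedPulling is the zero time), so their SLO and E2E durations are identical.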
Oct 29 23:32:37.724876 kubelet[2661]: I1029 23:32:37.724835 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-var-lib-calico\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724876 kubelet[2661]: I1029 23:32:37.724877 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-cni-net-dir\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724990 kubelet[2661]: I1029 23:32:37.724893 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-policysync\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724990 kubelet[2661]: I1029 23:32:37.724909 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2051169-1a85-4899-88bb-08f2e9cb9a43-tigera-ca-bundle\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724990 kubelet[2661]: I1029 23:32:37.724926 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgkbv\" (UniqueName: \"kubernetes.io/projected/b2051169-1a85-4899-88bb-08f2e9cb9a43-kube-api-access-bgkbv\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724990 kubelet[2661]: I1029 23:32:37.724955 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-var-run-calico\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.724990 kubelet[2661]: I1029 23:32:37.724980 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-flexvol-driver-host\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.725089 kubelet[2661]: I1029 23:32:37.724994 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-lib-modules\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.725089 kubelet[2661]: I1029 23:32:37.725023 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b2051169-1a85-4899-88bb-08f2e9cb9a43-node-certs\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.725089 kubelet[2661]: I1029 23:32:37.725037 2661 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-xtables-lock\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.725089 kubelet[2661]: I1029 23:32:37.725052 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-cni-log-dir\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.725089 kubelet[2661]: I1029 23:32:37.725069 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b2051169-1a85-4899-88bb-08f2e9cb9a43-cni-bin-dir\") pod \"calico-node-blqxz\" (UID: \"b2051169-1a85-4899-88bb-08f2e9cb9a43\") " pod="calico-system/calico-node-blqxz" Oct 29 23:32:37.833792 kubelet[2661]: E1029 23:32:37.833099 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:37.833792 kubelet[2661]: W1029 23:32:37.833156 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:37.833792 kubelet[2661]: E1029 23:32:37.833179 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:37.837866 kubelet[2661]: E1029 23:32:37.837839 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:37.837866 kubelet[2661]: W1029 23:32:37.837861 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:37.837992 kubelet[2661]: E1029 23:32:37.837878 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:37.868774 containerd[1503]: time="2025-10-29T23:32:37.868723240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dbd44d7df-d8psv,Uid:b5364a24-65c7-475d-ad0f-62cb4478fa0d,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:37.901776 containerd[1503]: time="2025-10-29T23:32:37.901725287Z" level=info msg="connecting to shim d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699" address="unix:///run/containerd/s/5389f5c815b7861686829f6415bf6a863e43462966742ba5c7748d2c5d41bf4d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:37.939412 systemd[1]: Started cri-containerd-d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699.scope - libcontainer container d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699. 
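The reconciler lines above enumerate every volume attached for calico-node-blqxz, and the UniqueName prefix encodes the volume plugin backing each one. A small tally in Python, with the names and prefixes transcribed from the log; note flexvol-driver-host, the host-path that corresponds to the FlexVolume plugin directory the probe errors below keep complaining about:

```python
from collections import Counter

# calico-node-blqxz volumes as listed in the reconciler entries above,
# mapped to the plugin prefix shown in their UniqueName.
volumes = {
    "var-lib-calico": "kubernetes.io/host-path",
    "cni-net-dir": "kubernetes.io/host-path",
    "policysync": "kubernetes.io/host-path",
    "tigera-ca-bundle": "kubernetes.io/configmap",
    "kube-api-access-bgkbv": "kubernetes.io/projected",
    "var-run-calico": "kubernetes.io/host-path",
    "flexvol-driver-host": "kubernetes.io/host-path",
    "lib-modules": "kubernetes.io/host-path",
    "node-certs": "kubernetes.io/secret",
    "xtables-lock": "kubernetes.io/host-path",
    "cni-log-dir": "kubernetes.io/host-path",
    "cni-bin-dir": "kubernetes.io/host-path",
}

print(Counter(volumes.values()))
# -> host-path: 9, configmap: 1, projected: 1, secret: 1 (12 volumes total)
```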
Oct 29 23:32:37.993619 kubelet[2661]: E1029 23:32:37.993564 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:32:38.019563 kubelet[2661]: E1029 23:32:38.019528 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.019563 kubelet[2661]: W1029 23:32:38.019558 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.019711 kubelet[2661]: E1029 23:32:38.019579 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.019757 kubelet[2661]: E1029 23:32:38.019733 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.019802 kubelet[2661]: W1029 23:32:38.019753 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.019833 kubelet[2661]: E1029 23:32:38.019803 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.021371 kubelet[2661]: E1029 23:32:38.021350 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.021371 kubelet[2661]: W1029 23:32:38.021369 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.021484 kubelet[2661]: E1029 23:32:38.021382 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.021569 kubelet[2661]: E1029 23:32:38.021549 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.021594 kubelet[2661]: W1029 23:32:38.021568 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.021594 kubelet[2661]: E1029 23:32:38.021578 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.021779 kubelet[2661]: E1029 23:32:38.021748 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.021779 kubelet[2661]: W1029 23:32:38.021757 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.021779 kubelet[2661]: E1029 23:32:38.021764 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.021906 kubelet[2661]: E1029 23:32:38.021897 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.021906 kubelet[2661]: W1029 23:32:38.021906 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.021985 kubelet[2661]: E1029 23:32:38.021913 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.022053 kubelet[2661]: E1029 23:32:38.022043 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022053 kubelet[2661]: W1029 23:32:38.022052 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022113 kubelet[2661]: E1029 23:32:38.022060 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.022200 kubelet[2661]: E1029 23:32:38.022189 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022200 kubelet[2661]: W1029 23:32:38.022198 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022256 kubelet[2661]: E1029 23:32:38.022206 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.022375 kubelet[2661]: E1029 23:32:38.022363 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022375 kubelet[2661]: W1029 23:32:38.022375 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022444 kubelet[2661]: E1029 23:32:38.022384 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.022596 kubelet[2661]: E1029 23:32:38.022582 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022596 kubelet[2661]: W1029 23:32:38.022596 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022648 kubelet[2661]: E1029 23:32:38.022605 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.022924 kubelet[2661]: E1029 23:32:38.022756 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022924 kubelet[2661]: W1029 23:32:38.022769 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022924 kubelet[2661]: E1029 23:32:38.022780 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.022924 kubelet[2661]: E1029 23:32:38.022914 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.022924 kubelet[2661]: W1029 23:32:38.022921 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.022924 kubelet[2661]: E1029 23:32:38.022929 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.023157 kubelet[2661]: E1029 23:32:38.023141 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.023157 kubelet[2661]: W1029 23:32:38.023155 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.023247 kubelet[2661]: E1029 23:32:38.023165 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.023398 kubelet[2661]: E1029 23:32:38.023323 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.023398 kubelet[2661]: W1029 23:32:38.023336 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.023398 kubelet[2661]: E1029 23:32:38.023348 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.023518 kubelet[2661]: E1029 23:32:38.023504 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.023518 kubelet[2661]: W1029 23:32:38.023516 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.023586 kubelet[2661]: E1029 23:32:38.023525 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.023714 kubelet[2661]: E1029 23:32:38.023650 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.023714 kubelet[2661]: W1029 23:32:38.023660 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.023714 kubelet[2661]: E1029 23:32:38.023668 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.023855 kubelet[2661]: E1029 23:32:38.023843 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.023881 kubelet[2661]: W1029 23:32:38.023855 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.023881 kubelet[2661]: E1029 23:32:38.023866 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.024000 kubelet[2661]: E1029 23:32:38.023990 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.024030 kubelet[2661]: W1029 23:32:38.024002 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.024030 kubelet[2661]: E1029 23:32:38.024010 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.024131 kubelet[2661]: E1029 23:32:38.024122 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.024131 kubelet[2661]: W1029 23:32:38.024131 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.024186 kubelet[2661]: E1029 23:32:38.024138 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.024279 kubelet[2661]: E1029 23:32:38.024268 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.024279 kubelet[2661]: W1029 23:32:38.024278 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.024346 kubelet[2661]: E1029 23:32:38.024286 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.027785 kubelet[2661]: E1029 23:32:38.027647 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.027785 kubelet[2661]: W1029 23:32:38.027662 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.027785 kubelet[2661]: E1029 23:32:38.027674 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.027785 kubelet[2661]: I1029 23:32:38.027704 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1344333c-5c8b-4908-b272-45117ec8f68a-socket-dir\") pod \"csi-node-driver-9jrl9\" (UID: \"1344333c-5c8b-4908-b272-45117ec8f68a\") " pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:38.028005 kubelet[2661]: E1029 23:32:38.027990 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.028169 kubelet[2661]: W1029 23:32:38.028045 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.028169 kubelet[2661]: E1029 23:32:38.028057 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.028169 kubelet[2661]: I1029 23:32:38.028100 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1344333c-5c8b-4908-b272-45117ec8f68a-kubelet-dir\") pod \"csi-node-driver-9jrl9\" (UID: \"1344333c-5c8b-4908-b272-45117ec8f68a\") " pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:38.028374 kubelet[2661]: E1029 23:32:38.028344 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.028374 kubelet[2661]: W1029 23:32:38.028365 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.028445 kubelet[2661]: E1029 23:32:38.028385 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.028623 kubelet[2661]: E1029 23:32:38.028598 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.028623 kubelet[2661]: W1029 23:32:38.028612 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.028692 kubelet[2661]: E1029 23:32:38.028622 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.028877 kubelet[2661]: E1029 23:32:38.028862 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.028877 kubelet[2661]: W1029 23:32:38.028875 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.028943 kubelet[2661]: E1029 23:32:38.028884 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.028943 kubelet[2661]: I1029 23:32:38.028907 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1344333c-5c8b-4908-b272-45117ec8f68a-registration-dir\") pod \"csi-node-driver-9jrl9\" (UID: \"1344333c-5c8b-4908-b272-45117ec8f68a\") " pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:38.029075 kubelet[2661]: E1029 23:32:38.029060 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.029075 kubelet[2661]: W1029 23:32:38.029075 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.029141 kubelet[2661]: E1029 23:32:38.029084 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.029141 kubelet[2661]: I1029 23:32:38.029102 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1344333c-5c8b-4908-b272-45117ec8f68a-varrun\") pod \"csi-node-driver-9jrl9\" (UID: \"1344333c-5c8b-4908-b272-45117ec8f68a\") " pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:38.029586 kubelet[2661]: E1029 23:32:38.029561 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.029586 kubelet[2661]: W1029 23:32:38.029581 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.029657 kubelet[2661]: E1029 23:32:38.029592 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.029724 kubelet[2661]: I1029 23:32:38.029696 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbh69\" (UniqueName: \"kubernetes.io/projected/1344333c-5c8b-4908-b272-45117ec8f68a-kube-api-access-gbh69\") pod \"csi-node-driver-9jrl9\" (UID: \"1344333c-5c8b-4908-b272-45117ec8f68a\") " pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:38.029816 kubelet[2661]: E1029 23:32:38.029799 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.029816 kubelet[2661]: W1029 23:32:38.029813 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.029925 kubelet[2661]: E1029 23:32:38.029823 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.030597 containerd[1503]: time="2025-10-29T23:32:38.030551344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dbd44d7df-d8psv,Uid:b5364a24-65c7-475d-ad0f-62cb4478fa0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699\"" Oct 29 23:32:38.030720 kubelet[2661]: E1029 23:32:38.030702 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.030720 kubelet[2661]: W1029 23:32:38.030718 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.030780 kubelet[2661]: E1029 23:32:38.030731 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.031106 kubelet[2661]: E1029 23:32:38.031077 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.031106 kubelet[2661]: W1029 23:32:38.031089 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.031106 kubelet[2661]: E1029 23:32:38.031099 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.031462 kubelet[2661]: E1029 23:32:38.031449 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.031495 kubelet[2661]: W1029 23:32:38.031464 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.031495 kubelet[2661]: E1029 23:32:38.031476 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.031640 kubelet[2661]: E1029 23:32:38.031627 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.031640 kubelet[2661]: W1029 23:32:38.031638 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.031681 kubelet[2661]: E1029 23:32:38.031646 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.032095 kubelet[2661]: E1029 23:32:38.032073 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.032095 kubelet[2661]: W1029 23:32:38.032090 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.032199 kubelet[2661]: E1029 23:32:38.032101 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.032424 containerd[1503]: time="2025-10-29T23:32:38.032403123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 23:32:38.033406 kubelet[2661]: E1029 23:32:38.033380 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.033406 kubelet[2661]: W1029 23:32:38.033400 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.033458 kubelet[2661]: E1029 23:32:38.033413 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.034182 kubelet[2661]: E1029 23:32:38.034022 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.034182 kubelet[2661]: W1029 23:32:38.034044 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.034182 kubelet[2661]: E1029 23:32:38.034057 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.034571 containerd[1503]: time="2025-10-29T23:32:38.034540344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blqxz,Uid:b2051169-1a85-4899-88bb-08f2e9cb9a43,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:38.054413 containerd[1503]: time="2025-10-29T23:32:38.054289086Z" level=info msg="connecting to shim 0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c" address="unix:///run/containerd/s/7daf2f7d543404fd586607ec5fc843defca70e3de7b883b8c82894b11345ea19" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:38.078440 systemd[1]: Started cri-containerd-0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c.scope - libcontainer container 0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c. Oct 29 23:32:38.117090 containerd[1503]: time="2025-10-29T23:32:38.116862879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-blqxz,Uid:b2051169-1a85-4899-88bb-08f2e9cb9a43,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\"" Oct 29 23:32:38.130675 kubelet[2661]: E1029 23:32:38.130626 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.130675 kubelet[2661]: W1029 23:32:38.130659 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.130675 kubelet[2661]: E1029 23:32:38.130677 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.130926 kubelet[2661]: E1029 23:32:38.130896 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.130926 kubelet[2661]: W1029 23:32:38.130909 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.130926 kubelet[2661]: E1029 23:32:38.130918 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.131165 kubelet[2661]: E1029 23:32:38.131128 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.131165 kubelet[2661]: W1029 23:32:38.131150 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.131219 kubelet[2661]: E1029 23:32:38.131166 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.131362 kubelet[2661]: E1029 23:32:38.131345 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.131362 kubelet[2661]: W1029 23:32:38.131356 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.131414 kubelet[2661]: E1029 23:32:38.131365 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.131566 kubelet[2661]: E1029 23:32:38.131537 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.131566 kubelet[2661]: W1029 23:32:38.131548 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.131566 kubelet[2661]: E1029 23:32:38.131557 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.131738 kubelet[2661]: E1029 23:32:38.131725 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.131738 kubelet[2661]: W1029 23:32:38.131736 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.131780 kubelet[2661]: E1029 23:32:38.131745 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.131906 kubelet[2661]: E1029 23:32:38.131895 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.131929 kubelet[2661]: W1029 23:32:38.131905 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.131929 kubelet[2661]: E1029 23:32:38.131913 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.132086 kubelet[2661]: E1029 23:32:38.132076 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.132107 kubelet[2661]: W1029 23:32:38.132086 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.132107 kubelet[2661]: E1029 23:32:38.132094 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.132306 kubelet[2661]: E1029 23:32:38.132291 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.132328 kubelet[2661]: W1029 23:32:38.132307 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.132328 kubelet[2661]: E1029 23:32:38.132324 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.132506 kubelet[2661]: E1029 23:32:38.132493 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.132506 kubelet[2661]: W1029 23:32:38.132504 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.132547 kubelet[2661]: E1029 23:32:38.132511 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.132678 kubelet[2661]: E1029 23:32:38.132666 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.132699 kubelet[2661]: W1029 23:32:38.132677 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.132699 kubelet[2661]: E1029 23:32:38.132685 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.132920 kubelet[2661]: E1029 23:32:38.132891 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.132920 kubelet[2661]: W1029 23:32:38.132905 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.132920 kubelet[2661]: E1029 23:32:38.132914 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.133154 kubelet[2661]: E1029 23:32:38.133137 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.133184 kubelet[2661]: W1029 23:32:38.133162 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.133184 kubelet[2661]: E1029 23:32:38.133174 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.133915 kubelet[2661]: E1029 23:32:38.133882 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.133915 kubelet[2661]: W1029 23:32:38.133898 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.133915 kubelet[2661]: E1029 23:32:38.133910 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.134107 kubelet[2661]: E1029 23:32:38.134092 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.134107 kubelet[2661]: W1029 23:32:38.134104 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.134217 kubelet[2661]: E1029 23:32:38.134113 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.134302 kubelet[2661]: E1029 23:32:38.134289 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.134328 kubelet[2661]: W1029 23:32:38.134302 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.134328 kubelet[2661]: E1029 23:32:38.134310 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.134579 kubelet[2661]: E1029 23:32:38.134563 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.134579 kubelet[2661]: W1029 23:32:38.134578 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.134624 kubelet[2661]: E1029 23:32:38.134591 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.135074 kubelet[2661]: E1029 23:32:38.135058 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.135104 kubelet[2661]: W1029 23:32:38.135074 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.135104 kubelet[2661]: E1029 23:32:38.135087 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.135288 kubelet[2661]: E1029 23:32:38.135275 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.135288 kubelet[2661]: W1029 23:32:38.135287 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.135355 kubelet[2661]: E1029 23:32:38.135296 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.135480 kubelet[2661]: E1029 23:32:38.135466 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.135480 kubelet[2661]: W1029 23:32:38.135477 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.135691 kubelet[2661]: E1029 23:32:38.135485 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.135778 kubelet[2661]: E1029 23:32:38.135762 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.135833 kubelet[2661]: W1029 23:32:38.135822 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.135879 kubelet[2661]: E1029 23:32:38.135870 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.136110 kubelet[2661]: E1029 23:32:38.136098 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.136171 kubelet[2661]: W1029 23:32:38.136161 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.136227 kubelet[2661]: E1029 23:32:38.136217 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.136468 kubelet[2661]: E1029 23:32:38.136453 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.136530 kubelet[2661]: W1029 23:32:38.136519 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.136582 kubelet[2661]: E1029 23:32:38.136572 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:38.136926 kubelet[2661]: E1029 23:32:38.136809 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.136926 kubelet[2661]: W1029 23:32:38.136821 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.136926 kubelet[2661]: E1029 23:32:38.136831 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.137077 kubelet[2661]: E1029 23:32:38.137064 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.137127 kubelet[2661]: W1029 23:32:38.137117 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.137177 kubelet[2661]: E1029 23:32:38.137168 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:38.143632 kubelet[2661]: E1029 23:32:38.143578 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:38.143632 kubelet[2661]: W1029 23:32:38.143592 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:38.143632 kubelet[2661]: E1029 23:32:38.143605 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:39.282213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323578647.mount: Deactivated successfully. 
Oct 29 23:32:39.636900 containerd[1503]: time="2025-10-29T23:32:39.636846314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:39.637694 containerd[1503]: time="2025-10-29T23:32:39.637663262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 29 23:32:39.638808 containerd[1503]: time="2025-10-29T23:32:39.638751126Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:39.641106 containerd[1503]: time="2025-10-29T23:32:39.641082136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:39.642226 containerd[1503]: time="2025-10-29T23:32:39.642188405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.609757074s" Oct 29 23:32:39.642226 containerd[1503]: time="2025-10-29T23:32:39.642221214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 29 23:32:39.643284 containerd[1503]: time="2025-10-29T23:32:39.643263225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 23:32:39.654582 containerd[1503]: time="2025-10-29T23:32:39.654541212Z" level=info msg="CreateContainer within sandbox \"d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 23:32:39.661540 containerd[1503]: time="2025-10-29T23:32:39.661505635Z" level=info msg="Container 0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:39.667736 containerd[1503]: time="2025-10-29T23:32:39.667698683Z" level=info msg="CreateContainer within sandbox \"d270ff259d7e33a87636d3f0503e31b3ecf2c8caea01806e72cfd29f1aa67699\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807\"" Oct 29 23:32:39.669354 containerd[1503]: time="2025-10-29T23:32:39.668555242Z" level=info msg="StartContainer for \"0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807\"" Oct 29 23:32:39.671319 containerd[1503]: time="2025-10-29T23:32:39.671278122Z" level=info msg="connecting to shim 0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807" address="unix:///run/containerd/s/5389f5c815b7861686829f6415bf6a863e43462966742ba5c7748d2c5d41bf4d" protocol=ttrpc version=3 Oct 29 23:32:39.689431 systemd[1]: Started cri-containerd-0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807.scope - libcontainer container 0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807. 
Oct 29 23:32:39.729027 containerd[1503]: time="2025-10-29T23:32:39.728987464Z" level=info msg="StartContainer for \"0fd4555acba6c99cd83e78bf7132ecc76f1b88c92e044f6f774f4d54b7465807\" returns successfully" Oct 29 23:32:40.376753 kubelet[2661]: E1029 23:32:40.376687 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:32:40.485559 kubelet[2661]: I1029 23:32:40.485230 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dbd44d7df-d8psv" podStartSLOduration=1.874474159 podStartE2EDuration="3.485214112s" podCreationTimestamp="2025-10-29 23:32:37 +0000 UTC" firstStartedPulling="2025-10-29 23:32:38.032063824 +0000 UTC m=+24.751528264" lastFinishedPulling="2025-10-29 23:32:39.642803777 +0000 UTC m=+26.362268217" observedRunningTime="2025-10-29 23:32:40.482943503 +0000 UTC m=+27.202407983" watchObservedRunningTime="2025-10-29 23:32:40.485214112 +0000 UTC m=+27.204678552" Oct 29 23:32:40.526221 containerd[1503]: time="2025-10-29T23:32:40.526174571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:40.526628 containerd[1503]: time="2025-10-29T23:32:40.526598765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 29 23:32:40.527436 containerd[1503]: time="2025-10-29T23:32:40.527395299Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:40.534691 containerd[1503]: time="2025-10-29T23:32:40.534327557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:40.535132 containerd[1503]: time="2025-10-29T23:32:40.535105845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 891.814613ms" Oct 29 23:32:40.535186 containerd[1503]: time="2025-10-29T23:32:40.535139174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 29 23:32:40.539955 containerd[1503]: time="2025-10-29T23:32:40.539923417Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 23:32:40.542403 kubelet[2661]: E1029 23:32:40.542379 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.542403 kubelet[2661]: W1029 23:32:40.542402 2661 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.544436 kubelet[2661]: E1029 23:32:40.544413 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.544643 kubelet[2661]: E1029 23:32:40.544630 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.546877 kubelet[2661]: W1029 23:32:40.544644 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.546877 kubelet[2661]: E1029 23:32:40.546874 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.547087 kubelet[2661]: E1029 23:32:40.547076 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547087 kubelet[2661]: W1029 23:32:40.547087 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547146 kubelet[2661]: E1029 23:32:40.547097 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.547260 kubelet[2661]: E1029 23:32:40.547234 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547293 kubelet[2661]: W1029 23:32:40.547259 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547293 kubelet[2661]: E1029 23:32:40.547268 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.547435 kubelet[2661]: E1029 23:32:40.547424 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547435 kubelet[2661]: W1029 23:32:40.547435 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547487 kubelet[2661]: E1029 23:32:40.547444 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.547588 kubelet[2661]: E1029 23:32:40.547579 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547588 kubelet[2661]: W1029 23:32:40.547588 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547647 kubelet[2661]: E1029 23:32:40.547596 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.547721 kubelet[2661]: E1029 23:32:40.547711 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547721 kubelet[2661]: W1029 23:32:40.547720 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547778 kubelet[2661]: E1029 23:32:40.547727 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.547867 kubelet[2661]: E1029 23:32:40.547857 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.547867 kubelet[2661]: W1029 23:32:40.547867 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.547921 kubelet[2661]: E1029 23:32:40.547874 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.548018 kubelet[2661]: E1029 23:32:40.547999 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.548045 kubelet[2661]: W1029 23:32:40.548018 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.548045 kubelet[2661]: E1029 23:32:40.548029 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.548151 kubelet[2661]: E1029 23:32:40.548143 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.548151 kubelet[2661]: W1029 23:32:40.548151 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.548204 kubelet[2661]: E1029 23:32:40.548157 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.548375 kubelet[2661]: E1029 23:32:40.548345 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.548411 kubelet[2661]: W1029 23:32:40.548362 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.548411 kubelet[2661]: E1029 23:32:40.548389 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.549456 containerd[1503]: time="2025-10-29T23:32:40.549409439Z" level=info msg="Container 620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:40.550634 kubelet[2661]: E1029 23:32:40.550613 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.550634 kubelet[2661]: W1029 23:32:40.550630 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.550714 kubelet[2661]: E1029 23:32:40.550644 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.551384 kubelet[2661]: E1029 23:32:40.551369 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.551384 kubelet[2661]: W1029 23:32:40.551383 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.551464 kubelet[2661]: E1029 23:32:40.551397 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.551561 kubelet[2661]: E1029 23:32:40.551551 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.551587 kubelet[2661]: W1029 23:32:40.551561 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.551587 kubelet[2661]: E1029 23:32:40.551571 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.551696 kubelet[2661]: E1029 23:32:40.551687 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.551696 kubelet[2661]: W1029 23:32:40.551696 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.551751 kubelet[2661]: E1029 23:32:40.551704 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.551961 kubelet[2661]: E1029 23:32:40.551951 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.551961 kubelet[2661]: W1029 23:32:40.551961 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552054 kubelet[2661]: E1029 23:32:40.551970 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.552124 kubelet[2661]: E1029 23:32:40.552113 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.552124 kubelet[2661]: W1029 23:32:40.552123 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552202 kubelet[2661]: E1029 23:32:40.552132 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.552281 kubelet[2661]: E1029 23:32:40.552269 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.552281 kubelet[2661]: W1029 23:32:40.552278 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552342 kubelet[2661]: E1029 23:32:40.552286 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.552449 kubelet[2661]: E1029 23:32:40.552440 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.552481 kubelet[2661]: W1029 23:32:40.552449 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552481 kubelet[2661]: E1029 23:32:40.552457 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.552641 kubelet[2661]: E1029 23:32:40.552613 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.552641 kubelet[2661]: W1029 23:32:40.552621 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552641 kubelet[2661]: E1029 23:32:40.552629 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.552930 kubelet[2661]: E1029 23:32:40.552746 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.552930 kubelet[2661]: W1029 23:32:40.552754 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.552930 kubelet[2661]: E1029 23:32:40.552760 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.552688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2832641165.mount: Deactivated successfully. Oct 29 23:32:40.553197 kubelet[2661]: E1029 23:32:40.553151 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.553197 kubelet[2661]: W1029 23:32:40.553161 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.553197 kubelet[2661]: E1029 23:32:40.553174 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.553569 kubelet[2661]: E1029 23:32:40.553469 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.553569 kubelet[2661]: W1029 23:32:40.553486 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.553569 kubelet[2661]: E1029 23:32:40.553498 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.553754 kubelet[2661]: E1029 23:32:40.553741 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.553824 kubelet[2661]: W1029 23:32:40.553814 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.553872 kubelet[2661]: E1029 23:32:40.553863 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.554136 kubelet[2661]: E1029 23:32:40.554124 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.554207 kubelet[2661]: W1029 23:32:40.554196 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.554271 kubelet[2661]: E1029 23:32:40.554261 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.554478 kubelet[2661]: E1029 23:32:40.554465 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.554639 kubelet[2661]: W1029 23:32:40.554532 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.554639 kubelet[2661]: E1029 23:32:40.554548 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.554755 kubelet[2661]: E1029 23:32:40.554745 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.554810 kubelet[2661]: W1029 23:32:40.554800 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.554858 kubelet[2661]: E1029 23:32:40.554849 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.555102 kubelet[2661]: E1029 23:32:40.555090 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.555261 kubelet[2661]: W1029 23:32:40.555163 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.555261 kubelet[2661]: E1029 23:32:40.555178 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.555815 kubelet[2661]: E1029 23:32:40.555505 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.555815 kubelet[2661]: W1029 23:32:40.555517 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.555815 kubelet[2661]: E1029 23:32:40.555528 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.555815 kubelet[2661]: E1029 23:32:40.555663 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.555815 kubelet[2661]: W1029 23:32:40.555691 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.555999 kubelet[2661]: E1029 23:32:40.555699 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.556442 kubelet[2661]: E1029 23:32:40.556186 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.556442 kubelet[2661]: W1029 23:32:40.556198 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.556442 kubelet[2661]: E1029 23:32:40.556211 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.556731 kubelet[2661]: E1029 23:32:40.556719 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.556794 kubelet[2661]: W1029 23:32:40.556784 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.556860 kubelet[2661]: E1029 23:32:40.556850 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:32:40.557132 kubelet[2661]: E1029 23:32:40.557120 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:32:40.557198 kubelet[2661]: W1029 23:32:40.557187 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:32:40.557258 kubelet[2661]: E1029 23:32:40.557249 2661 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:32:40.561511 containerd[1503]: time="2025-10-29T23:32:40.561469272Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\"" Oct 29 23:32:40.561949 containerd[1503]: time="2025-10-29T23:32:40.561923714Z" level=info msg="StartContainer for \"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\"" Oct 29 23:32:40.563712 containerd[1503]: time="2025-10-29T23:32:40.563669782Z" level=info msg="connecting to shim 620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae" address="unix:///run/containerd/s/7daf2f7d543404fd586607ec5fc843defca70e3de7b883b8c82894b11345ea19" protocol=ttrpc version=3 Oct 29 23:32:40.584430 systemd[1]: Started cri-containerd-620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae.scope - libcontainer container 620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae. Oct 29 23:32:40.635817 systemd[1]: cri-containerd-620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae.scope: Deactivated successfully. Oct 29 23:32:40.636103 systemd[1]: cri-containerd-620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae.scope: Consumed 29ms CPU time, 6.2M memory peak, 4.5M written to disk. Oct 29 23:32:40.683752 containerd[1503]: time="2025-10-29T23:32:40.683672629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\" id:\"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\" pid:3375 exited_at:{seconds:1761780760 nanos:657492251}" Oct 29 23:32:40.715525 containerd[1503]: time="2025-10-29T23:32:40.715482996Z" level=info msg="StartContainer for \"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\" returns successfully" Oct 29 23:32:40.719964 containerd[1503]: time="2025-10-29T23:32:40.719893738Z" level=info msg="received exit event container_id:\"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\" id:\"620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae\" pid:3375 exited_at:{seconds:1761780760 nanos:657492251}" Oct 29 23:32:40.749709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-620d33035fe30bf1f90a855371597767ad7af9acd69fff2c17c87af4f17b85ae-rootfs.mount: Deactivated successfully. 
Oct 29 23:32:41.467814 kubelet[2661]: I1029 23:32:41.467782 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:41.468848 containerd[1503]: time="2025-10-29T23:32:41.468809747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 23:32:42.376579 kubelet[2661]: E1029 23:32:42.376526 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:32:44.376364 kubelet[2661]: E1029 23:32:44.376301 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:32:45.304548 containerd[1503]: time="2025-10-29T23:32:45.304495747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:45.305605 containerd[1503]: time="2025-10-29T23:32:45.305386185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 29 23:32:45.306411 containerd[1503]: time="2025-10-29T23:32:45.306374685Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:45.309061 containerd[1503]: time="2025-10-29T23:32:45.309016073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:45.309871 containerd[1503]: time="2025-10-29T23:32:45.309821093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.840973696s" Oct 29 23:32:45.309951 containerd[1503]: time="2025-10-29T23:32:45.309876985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 29 23:32:45.314824 containerd[1503]: time="2025-10-29T23:32:45.314783837Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 23:32:45.324599 containerd[1503]: time="2025-10-29T23:32:45.324384855Z" level=info msg="Container 756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:45.335259 containerd[1503]: time="2025-10-29T23:32:45.335188980Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\"" Oct 29 23:32:45.335934 containerd[1503]: 
time="2025-10-29T23:32:45.335839725Z" level=info msg="StartContainer for \"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\"" Oct 29 23:32:45.337361 containerd[1503]: time="2025-10-29T23:32:45.337333018Z" level=info msg="connecting to shim 756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790" address="unix:///run/containerd/s/7daf2f7d543404fd586607ec5fc843defca70e3de7b883b8c82894b11345ea19" protocol=ttrpc version=3 Oct 29 23:32:45.356538 systemd[1]: Started cri-containerd-756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790.scope - libcontainer container 756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790. Oct 29 23:32:45.404742 containerd[1503]: time="2025-10-29T23:32:45.404679292Z" level=info msg="StartContainer for \"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\" returns successfully" Oct 29 23:32:45.952503 systemd[1]: cri-containerd-756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790.scope: Deactivated successfully. Oct 29 23:32:45.952799 systemd[1]: cri-containerd-756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790.scope: Consumed 495ms CPU time, 177M memory peak, 2.9M read from disk, 165.9M written to disk. Oct 29 23:32:45.956757 containerd[1503]: time="2025-10-29T23:32:45.956714356Z" level=info msg="received exit event container_id:\"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\" id:\"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\" pid:3433 exited_at:{seconds:1761780765 nanos:956137587}" Oct 29 23:32:45.957030 containerd[1503]: time="2025-10-29T23:32:45.957000539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\" id:\"756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790\" pid:3433 exited_at:{seconds:1761780765 nanos:956137587}" Oct 29 23:32:45.975667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-756fde0cceeb0030118f8d4a3bbfd146a4f9eee35b32a12f49bb95b0fcfe9790-rootfs.mount: Deactivated successfully. Oct 29 23:32:46.005591 kubelet[2661]: I1029 23:32:46.005557 2661 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 23:32:46.066991 systemd[1]: Created slice kubepods-besteffort-podaa5b4e14_b80b_4af9_80b3_9f6279c72ae8.slice - libcontainer container kubepods-besteffort-podaa5b4e14_b80b_4af9_80b3_9f6279c72ae8.slice. Oct 29 23:32:46.075659 systemd[1]: Created slice kubepods-burstable-podce73ae69_de96_4cf2_9fd5_5842622760d9.slice - libcontainer container kubepods-burstable-podce73ae69_de96_4cf2_9fd5_5842622760d9.slice. Oct 29 23:32:46.086726 systemd[1]: Created slice kubepods-besteffort-pod628d1c40_ffd8_42f2_bf12_78878eaf122b.slice - libcontainer container kubepods-besteffort-pod628d1c40_ffd8_42f2_bf12_78878eaf122b.slice. Oct 29 23:32:46.094444 systemd[1]: Created slice kubepods-besteffort-pod1e0f27ca_ac8c_40e5_bfd4_e2be1f0ea6af.slice - libcontainer container kubepods-besteffort-pod1e0f27ca_ac8c_40e5_bfd4_e2be1f0ea6af.slice. 
Oct 29 23:32:46.096070 kubelet[2661]: I1029 23:32:46.096039 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddk2\" (UniqueName: \"kubernetes.io/projected/ce73ae69-de96-4cf2-9fd5-5842622760d9-kube-api-access-cddk2\") pod \"coredns-674b8bbfcf-lx67m\" (UID: \"ce73ae69-de96-4cf2-9fd5-5842622760d9\") " pod="kube-system/coredns-674b8bbfcf-lx67m" Oct 29 23:32:46.096149 kubelet[2661]: I1029 23:32:46.096076 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce73ae69-de96-4cf2-9fd5-5842622760d9-config-volume\") pod \"coredns-674b8bbfcf-lx67m\" (UID: \"ce73ae69-de96-4cf2-9fd5-5842622760d9\") " pod="kube-system/coredns-674b8bbfcf-lx67m" Oct 29 23:32:46.096149 kubelet[2661]: I1029 23:32:46.096096 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw49n\" (UniqueName: \"kubernetes.io/projected/628d1c40-ffd8-42f2-bf12-78878eaf122b-kube-api-access-kw49n\") pod \"calico-kube-controllers-5b49b9964b-6j8qw\" (UID: \"628d1c40-ffd8-42f2-bf12-78878eaf122b\") " pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" Oct 29 23:32:46.096149 kubelet[2661]: I1029 23:32:46.096120 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/628d1c40-ffd8-42f2-bf12-78878eaf122b-tigera-ca-bundle\") pod \"calico-kube-controllers-5b49b9964b-6j8qw\" (UID: \"628d1c40-ffd8-42f2-bf12-78878eaf122b\") " pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" Oct 29 23:32:46.100720 systemd[1]: Created slice kubepods-burstable-pod6057bac1_e88b_43f7_a462_8ee6bc372920.slice - libcontainer container kubepods-burstable-pod6057bac1_e88b_43f7_a462_8ee6bc372920.slice. Oct 29 23:32:46.108377 systemd[1]: Created slice kubepods-besteffort-podb435311e_f7f8_4128_85fc_2874c0aa8c02.slice - libcontainer container kubepods-besteffort-podb435311e_f7f8_4128_85fc_2874c0aa8c02.slice. Oct 29 23:32:46.112869 systemd[1]: Created slice kubepods-besteffort-poddbd38070_358f_48dd_8533_17b5e6ecf6e9.slice - libcontainer container kubepods-besteffort-poddbd38070_358f_48dd_8533_17b5e6ecf6e9.slice. 
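Annotation: the reconciler_common entries above and below identify each volume by a UniqueName that, for these configmap/secret/projected volumes, reads as the plugin name followed by the pod UID and the volume name. The pattern can be read straight off the log (e.g. kubernetes.io/projected/ce73ae69-de96-4cf2-9fd5-5842622760d9-kube-api-access-cddk2); the tiny sketch below only restates that composition and is not taken from kubelet's volume packages.

```go
// Composes the volume UniqueName strings visible in the reconciler_common
// entries; based on the logged names, not on kubelet's volume utilities.
package main

import "fmt"

func uniqueVolumeName(plugin, podUID, volume string) string {
	// e.g. ("kubernetes.io/configmap", "628d1c40-ffd8-42f2-bf12-78878eaf122b", "tigera-ca-bundle")
	// -> "kubernetes.io/configmap/628d1c40-ffd8-42f2-bf12-78878eaf122b-tigera-ca-bundle"
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/projected",
		"ce73ae69-de96-4cf2-9fd5-5842622760d9", "kube-api-access-cddk2"))
}
```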
Oct 29 23:32:46.196915 kubelet[2661]: I1029 23:32:46.196874 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af-calico-apiserver-certs\") pod \"calico-apiserver-674644db6-wwszd\" (UID: \"1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af\") " pod="calico-apiserver/calico-apiserver-674644db6-wwszd" Oct 29 23:32:46.196915 kubelet[2661]: I1029 23:32:46.196917 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbd38070-358f-48dd-8533-17b5e6ecf6e9-goldmane-ca-bundle\") pod \"goldmane-666569f655-hrtth\" (UID: \"dbd38070-358f-48dd-8533-17b5e6ecf6e9\") " pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.196915 kubelet[2661]: I1029 23:32:46.196934 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dbd38070-358f-48dd-8533-17b5e6ecf6e9-goldmane-key-pair\") pod \"goldmane-666569f655-hrtth\" (UID: \"dbd38070-358f-48dd-8533-17b5e6ecf6e9\") " pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.197111 kubelet[2661]: I1029 23:32:46.196952 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqd45\" (UniqueName: \"kubernetes.io/projected/b435311e-f7f8-4128-85fc-2874c0aa8c02-kube-api-access-kqd45\") pod \"whisker-6684bc5cfb-bcc7k\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " pod="calico-system/whisker-6684bc5cfb-bcc7k" Oct 29 23:32:46.197111 kubelet[2661]: I1029 23:32:46.196971 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvg6n\" (UniqueName: \"kubernetes.io/projected/aa5b4e14-b80b-4af9-80b3-9f6279c72ae8-kube-api-access-nvg6n\") pod \"calico-apiserver-674644db6-bb7xs\" (UID: \"aa5b4e14-b80b-4af9-80b3-9f6279c72ae8\") " pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" Oct 29 23:32:46.197111 kubelet[2661]: I1029 23:32:46.197002 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jqd\" (UniqueName: \"kubernetes.io/projected/6057bac1-e88b-43f7-a462-8ee6bc372920-kube-api-access-k2jqd\") pod \"coredns-674b8bbfcf-xmtzx\" (UID: \"6057bac1-e88b-43f7-a462-8ee6bc372920\") " pod="kube-system/coredns-674b8bbfcf-xmtzx" Oct 29 23:32:46.197111 kubelet[2661]: I1029 23:32:46.197016 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbd38070-358f-48dd-8533-17b5e6ecf6e9-config\") pod \"goldmane-666569f655-hrtth\" (UID: \"dbd38070-358f-48dd-8533-17b5e6ecf6e9\") " pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.197111 kubelet[2661]: I1029 23:32:46.197039 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa5b4e14-b80b-4af9-80b3-9f6279c72ae8-calico-apiserver-certs\") pod \"calico-apiserver-674644db6-bb7xs\" (UID: \"aa5b4e14-b80b-4af9-80b3-9f6279c72ae8\") " pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" Oct 29 23:32:46.197223 kubelet[2661]: I1029 23:32:46.197061 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-backend-key-pair\") pod \"whisker-6684bc5cfb-bcc7k\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " pod="calico-system/whisker-6684bc5cfb-bcc7k" Oct 29 23:32:46.197223 kubelet[2661]: I1029 23:32:46.197080 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s28g\" (UniqueName: \"kubernetes.io/projected/dbd38070-358f-48dd-8533-17b5e6ecf6e9-kube-api-access-4s28g\") pod \"goldmane-666569f655-hrtth\" (UID: \"dbd38070-358f-48dd-8533-17b5e6ecf6e9\") " pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.197223 kubelet[2661]: I1029 23:32:46.197130 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpx5m\" (UniqueName: \"kubernetes.io/projected/1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af-kube-api-access-mpx5m\") pod \"calico-apiserver-674644db6-wwszd\" (UID: \"1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af\") " pod="calico-apiserver/calico-apiserver-674644db6-wwszd" Oct 29 23:32:46.197223 kubelet[2661]: I1029 23:32:46.197151 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6057bac1-e88b-43f7-a462-8ee6bc372920-config-volume\") pod \"coredns-674b8bbfcf-xmtzx\" (UID: \"6057bac1-e88b-43f7-a462-8ee6bc372920\") " pod="kube-system/coredns-674b8bbfcf-xmtzx" Oct 29 23:32:46.197223 kubelet[2661]: I1029 23:32:46.197166 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-ca-bundle\") pod \"whisker-6684bc5cfb-bcc7k\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " pod="calico-system/whisker-6684bc5cfb-bcc7k" Oct 29 23:32:46.376267 containerd[1503]: time="2025-10-29T23:32:46.376205833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-bb7xs,Uid:aa5b4e14-b80b-4af9-80b3-9f6279c72ae8,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:46.379676 containerd[1503]: time="2025-10-29T23:32:46.379509344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lx67m,Uid:ce73ae69-de96-4cf2-9fd5-5842622760d9,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:46.388594 systemd[1]: Created slice kubepods-besteffort-pod1344333c_5c8b_4908_b272_45117ec8f68a.slice - libcontainer container kubepods-besteffort-pod1344333c_5c8b_4908_b272_45117ec8f68a.slice. 
Oct 29 23:32:46.392514 containerd[1503]: time="2025-10-29T23:32:46.392464892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jrl9,Uid:1344333c-5c8b-4908-b272-45117ec8f68a,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:46.392830 containerd[1503]: time="2025-10-29T23:32:46.392803925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b49b9964b-6j8qw,Uid:628d1c40-ffd8-42f2-bf12-78878eaf122b,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:46.399273 containerd[1503]: time="2025-10-29T23:32:46.398092223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-wwszd,Uid:1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:46.406389 containerd[1503]: time="2025-10-29T23:32:46.406341038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmtzx,Uid:6057bac1-e88b-43f7-a462-8ee6bc372920,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:46.413269 containerd[1503]: time="2025-10-29T23:32:46.412754738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6684bc5cfb-bcc7k,Uid:b435311e-f7f8-4128-85fc-2874c0aa8c02,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:46.420413 containerd[1503]: time="2025-10-29T23:32:46.418153420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hrtth,Uid:dbd38070-358f-48dd-8533-17b5e6ecf6e9,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:46.507610 containerd[1503]: time="2025-10-29T23:32:46.507572381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 23:32:46.545469 containerd[1503]: time="2025-10-29T23:32:46.545403562Z" level=error msg="Failed to destroy network for sandbox \"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.546013 containerd[1503]: time="2025-10-29T23:32:46.545970844Z" level=error msg="Failed to destroy network for sandbox \"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.546927 containerd[1503]: time="2025-10-29T23:32:46.546894363Z" level=error msg="Failed to destroy network for sandbox \"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.548296 containerd[1503]: time="2025-10-29T23:32:46.548172238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b49b9964b-6j8qw,Uid:628d1c40-ffd8-42f2-bf12-78878eaf122b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.549514 kubelet[2661]: E1029 23:32:46.549473 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.549694 kubelet[2661]: E1029 23:32:46.549676 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" Oct 29 23:32:46.550130 kubelet[2661]: E1029 23:32:46.549806 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" Oct 29 23:32:46.550130 kubelet[2661]: E1029 23:32:46.549879 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b49b9964b-6j8qw_calico-system(628d1c40-ffd8-42f2-bf12-78878eaf122b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b49b9964b-6j8qw_calico-system(628d1c40-ffd8-42f2-bf12-78878eaf122b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f8feefaf59521cf1341d5ab9357d03fd9be8619a336739ec011b4fac86dd111\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b" Oct 29 23:32:46.550763 containerd[1503]: time="2025-10-29T23:32:46.549303481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lx67m,Uid:ce73ae69-de96-4cf2-9fd5-5842622760d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.551022 containerd[1503]: time="2025-10-29T23:32:46.549505685Z" level=error msg="Failed to destroy network for sandbox \"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.551892 containerd[1503]: time="2025-10-29T23:32:46.550058884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-bb7xs,Uid:aa5b4e14-b80b-4af9-80b3-9f6279c72ae8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.552140 containerd[1503]: time="2025-10-29T23:32:46.552102083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-wwszd,Uid:1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.552227 kubelet[2661]: E1029 23:32:46.552112 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.552227 kubelet[2661]: E1029 23:32:46.552174 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" Oct 29 23:32:46.552227 kubelet[2661]: E1029 23:32:46.552193 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" Oct 29 23:32:46.552341 kubelet[2661]: E1029 23:32:46.552235 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674644db6-bb7xs_calico-apiserver(aa5b4e14-b80b-4af9-80b3-9f6279c72ae8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-674644db6-bb7xs_calico-apiserver(aa5b4e14-b80b-4af9-80b3-9f6279c72ae8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d2bc562b9b194328317239ddf966b3d2c65bc1de12c2172245e63e697bd7350\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:32:46.552341 kubelet[2661]: E1029 23:32:46.552286 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.552341 kubelet[2661]: E1029 23:32:46.552302 2661 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lx67m" Oct 29 23:32:46.552435 kubelet[2661]: E1029 23:32:46.552314 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lx67m" Oct 29 23:32:46.552435 kubelet[2661]: E1029 23:32:46.552348 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lx67m_kube-system(ce73ae69-de96-4cf2-9fd5-5842622760d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lx67m_kube-system(ce73ae69-de96-4cf2-9fd5-5842622760d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd998e1c6c0efd9d35c087c047f07bdd4022158207482fcb28bf1aca18db8fbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lx67m" podUID="ce73ae69-de96-4cf2-9fd5-5842622760d9" Oct 29 23:32:46.553084 kubelet[2661]: E1029 23:32:46.553025 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.553084 kubelet[2661]: E1029 23:32:46.553073 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" Oct 29 23:32:46.553084 kubelet[2661]: E1029 23:32:46.553088 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" Oct 29 23:32:46.553211 kubelet[2661]: E1029 23:32:46.553129 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-674644db6-wwszd_calico-apiserver(1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-674644db6-wwszd_calico-apiserver(1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8541d56011a00b953e6b862a7c958841ef16dd543ecb41e19882809c45173f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af" Oct 29 23:32:46.554180 containerd[1503]: time="2025-10-29T23:32:46.554078749Z" level=error msg="Failed to destroy network for sandbox \"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.555023 containerd[1503]: time="2025-10-29T23:32:46.554989945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jrl9,Uid:1344333c-5c8b-4908-b272-45117ec8f68a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.555465 kubelet[2661]: E1029 23:32:46.555308 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.555465 kubelet[2661]: E1029 23:32:46.555352 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:46.555465 kubelet[2661]: E1029 23:32:46.555371 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jrl9" Oct 29 23:32:46.555597 kubelet[2661]: E1029 23:32:46.555416 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d954482e541c7c16d32e2f85c2617768e8bb3e98653ed4ba22b6981dd1daa452\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:32:46.556811 containerd[1503]: time="2025-10-29T23:32:46.556781530Z" level=error msg="Failed to destroy network for sandbox \"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.559139 containerd[1503]: time="2025-10-29T23:32:46.559052579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmtzx,Uid:6057bac1-e88b-43f7-a462-8ee6bc372920,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.560284 kubelet[2661]: E1029 23:32:46.559408 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.560284 kubelet[2661]: E1029 23:32:46.559535 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmtzx" Oct 29 23:32:46.560284 kubelet[2661]: E1029 23:32:46.559571 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xmtzx" Oct 29 23:32:46.560411 kubelet[2661]: E1029 23:32:46.559608 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xmtzx_kube-system(6057bac1-e88b-43f7-a462-8ee6bc372920)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xmtzx_kube-system(6057bac1-e88b-43f7-a462-8ee6bc372920)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84713253050d34df6e12dfc7996ce4ec2d7e34f7842799f5038ae2320dfe1435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xmtzx" podUID="6057bac1-e88b-43f7-a462-8ee6bc372920" Oct 29 23:32:46.561095 containerd[1503]: time="2025-10-29T23:32:46.561049529Z" level=error msg="Failed to destroy network for sandbox \"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.564338 containerd[1503]: time="2025-10-29T23:32:46.564291546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6684bc5cfb-bcc7k,Uid:b435311e-f7f8-4128-85fc-2874c0aa8c02,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.565619 kubelet[2661]: E1029 23:32:46.565449 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.565619 kubelet[2661]: E1029 23:32:46.565511 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6684bc5cfb-bcc7k" Oct 29 23:32:46.565619 kubelet[2661]: E1029 23:32:46.565529 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6684bc5cfb-bcc7k" Oct 29 23:32:46.565742 kubelet[2661]: E1029 23:32:46.565579 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6684bc5cfb-bcc7k_calico-system(b435311e-f7f8-4128-85fc-2874c0aa8c02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6684bc5cfb-bcc7k_calico-system(b435311e-f7f8-4128-85fc-2874c0aa8c02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a60e8b928e6cc38a600788887d4f60113c46c7db091d9534bf66b51af2b7950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6684bc5cfb-bcc7k" podUID="b435311e-f7f8-4128-85fc-2874c0aa8c02" Oct 29 23:32:46.568254 containerd[1503]: time="2025-10-29T23:32:46.568201588Z" level=error msg="Failed to destroy network for sandbox \"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.569338 containerd[1503]: time="2025-10-29T23:32:46.569304545Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-hrtth,Uid:dbd38070-358f-48dd-8533-17b5e6ecf6e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.569570 kubelet[2661]: E1029 23:32:46.569532 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:46.569612 kubelet[2661]: E1029 23:32:46.569588 2661 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.569642 kubelet[2661]: E1029 23:32:46.569609 2661 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hrtth" Oct 29 23:32:46.569681 kubelet[2661]: E1029 23:32:46.569657 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hrtth_calico-system(dbd38070-358f-48dd-8533-17b5e6ecf6e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hrtth_calico-system(dbd38070-358f-48dd-8533-17b5e6ecf6e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96c9a9b2b5fc41b0e18fe13a6128e2d4f61a60e84ecb529bffdbd68c940849d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:32:47.326288 systemd[1]: run-netns-cni\x2d11ae773f\x2d6724\x2d4584\x2dd23e\x2d80b5c1895223.mount: Deactivated successfully. Oct 29 23:32:47.326453 systemd[1]: run-netns-cni\x2dcdb20b19\x2d7e7f\x2dc5cf\x2d4c01\x2d54ccdaab0a1e.mount: Deactivated successfully. Oct 29 23:32:47.326502 systemd[1]: run-netns-cni\x2db367afdf\x2da699\x2de4c3\x2deed5\x2d205bb5b8bb46.mount: Deactivated successfully. Oct 29 23:32:47.326547 systemd[1]: run-netns-cni\x2da33c33b5\x2d5ae7\x2d3db6\x2d8e1b\x2dd26ba7cfec2f.mount: Deactivated successfully. Oct 29 23:32:49.847380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995392550.mount: Deactivated successfully. 
Oct 29 23:32:50.175655 containerd[1503]: time="2025-10-29T23:32:50.175521057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:50.190534 containerd[1503]: time="2025-10-29T23:32:50.176035835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 29 23:32:50.190534 containerd[1503]: time="2025-10-29T23:32:50.179666043Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:50.190681 containerd[1503]: time="2025-10-29T23:32:50.186423686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.678383764s" Oct 29 23:32:50.190681 containerd[1503]: time="2025-10-29T23:32:50.190626723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 29 23:32:50.191352 containerd[1503]: time="2025-10-29T23:32:50.191310453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:50.235905 containerd[1503]: time="2025-10-29T23:32:50.233094821Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 23:32:50.245188 containerd[1503]: time="2025-10-29T23:32:50.245135066Z" level=info msg="Container 2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:50.255316 containerd[1503]: time="2025-10-29T23:32:50.255278031Z" level=info msg="CreateContainer within sandbox \"0f51eca4b61d5e5fad91966b4baa97ef25e33b67de09f5fe683ccae29b87889c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\"" Oct 29 23:32:50.256515 containerd[1503]: time="2025-10-29T23:32:50.256442852Z" level=info msg="StartContainer for \"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\"" Oct 29 23:32:50.257928 containerd[1503]: time="2025-10-29T23:32:50.257902248Z" level=info msg="connecting to shim 2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3" address="unix:///run/containerd/s/7daf2f7d543404fd586607ec5fc843defca70e3de7b883b8c82894b11345ea19" protocol=ttrpc version=3 Oct 29 23:32:50.297426 systemd[1]: Started cri-containerd-2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3.scope - libcontainer container 2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3. Oct 29 23:32:50.339279 containerd[1503]: time="2025-10-29T23:32:50.339207716Z" level=info msg="StartContainer for \"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\" returns successfully" Oct 29 23:32:50.455145 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 23:32:50.455344 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 29 23:32:50.543032 kubelet[2661]: I1029 23:32:50.542955 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-blqxz" podStartSLOduration=1.469744982 podStartE2EDuration="13.542939813s" podCreationTimestamp="2025-10-29 23:32:37 +0000 UTC" firstStartedPulling="2025-10-29 23:32:38.1183141 +0000 UTC m=+24.837778540" lastFinishedPulling="2025-10-29 23:32:50.191508931 +0000 UTC m=+36.910973371" observedRunningTime="2025-10-29 23:32:50.538001636 +0000 UTC m=+37.257466076" watchObservedRunningTime="2025-10-29 23:32:50.542939813 +0000 UTC m=+37.262404253" Oct 29 23:32:50.628444 kubelet[2661]: I1029 23:32:50.628382 2661 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-ca-bundle\") pod \"b435311e-f7f8-4128-85fc-2874c0aa8c02\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " Oct 29 23:32:50.628444 kubelet[2661]: I1029 23:32:50.628431 2661 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-backend-key-pair\") pod \"b435311e-f7f8-4128-85fc-2874c0aa8c02\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " Oct 29 23:32:50.628623 kubelet[2661]: I1029 23:32:50.628461 2661 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqd45\" (UniqueName: \"kubernetes.io/projected/b435311e-f7f8-4128-85fc-2874c0aa8c02-kube-api-access-kqd45\") pod \"b435311e-f7f8-4128-85fc-2874c0aa8c02\" (UID: \"b435311e-f7f8-4128-85fc-2874c0aa8c02\") " Oct 29 23:32:50.635203 kubelet[2661]: I1029 23:32:50.635142 2661 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b435311e-f7f8-4128-85fc-2874c0aa8c02" (UID: "b435311e-f7f8-4128-85fc-2874c0aa8c02"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 23:32:50.639747 kubelet[2661]: I1029 23:32:50.639693 2661 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b435311e-f7f8-4128-85fc-2874c0aa8c02" (UID: "b435311e-f7f8-4128-85fc-2874c0aa8c02"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 23:32:50.640455 kubelet[2661]: I1029 23:32:50.640412 2661 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b435311e-f7f8-4128-85fc-2874c0aa8c02-kube-api-access-kqd45" (OuterVolumeSpecName: "kube-api-access-kqd45") pod "b435311e-f7f8-4128-85fc-2874c0aa8c02" (UID: "b435311e-f7f8-4128-85fc-2874c0aa8c02"). InnerVolumeSpecName "kube-api-access-kqd45". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 23:32:50.729679 kubelet[2661]: I1029 23:32:50.729637 2661 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqd45\" (UniqueName: \"kubernetes.io/projected/b435311e-f7f8-4128-85fc-2874c0aa8c02-kube-api-access-kqd45\") on node \"localhost\" DevicePath \"\"" Oct 29 23:32:50.729679 kubelet[2661]: I1029 23:32:50.729672 2661 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 29 23:32:50.729679 kubelet[2661]: I1029 23:32:50.729682 2661 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b435311e-f7f8-4128-85fc-2874c0aa8c02-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 29 23:32:50.848206 systemd[1]: var-lib-kubelet-pods-b435311e\x2df7f8\x2d4128\x2d85fc\x2d2874c0aa8c02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqd45.mount: Deactivated successfully. Oct 29 23:32:50.848669 systemd[1]: var-lib-kubelet-pods-b435311e\x2df7f8\x2d4128\x2d85fc\x2d2874c0aa8c02-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 23:32:51.382775 systemd[1]: Removed slice kubepods-besteffort-podb435311e_f7f8_4128_85fc_2874c0aa8c02.slice - libcontainer container kubepods-besteffort-podb435311e_f7f8_4128_85fc_2874c0aa8c02.slice. Oct 29 23:32:51.519812 kubelet[2661]: I1029 23:32:51.519754 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:51.579746 systemd[1]: Created slice kubepods-besteffort-podf5482b89_57c5_421d_ae18_209fa743d482.slice - libcontainer container kubepods-besteffort-podf5482b89_57c5_421d_ae18_209fa743d482.slice. 
Oct 29 23:32:51.634937 kubelet[2661]: I1029 23:32:51.634800 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdhg\" (UniqueName: \"kubernetes.io/projected/f5482b89-57c5-421d-ae18-209fa743d482-kube-api-access-zhdhg\") pod \"whisker-68d6cc8c8-bzrtz\" (UID: \"f5482b89-57c5-421d-ae18-209fa743d482\") " pod="calico-system/whisker-68d6cc8c8-bzrtz" Oct 29 23:32:51.634937 kubelet[2661]: I1029 23:32:51.634859 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5482b89-57c5-421d-ae18-209fa743d482-whisker-backend-key-pair\") pod \"whisker-68d6cc8c8-bzrtz\" (UID: \"f5482b89-57c5-421d-ae18-209fa743d482\") " pod="calico-system/whisker-68d6cc8c8-bzrtz" Oct 29 23:32:51.634937 kubelet[2661]: I1029 23:32:51.634892 2661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5482b89-57c5-421d-ae18-209fa743d482-whisker-ca-bundle\") pod \"whisker-68d6cc8c8-bzrtz\" (UID: \"f5482b89-57c5-421d-ae18-209fa743d482\") " pod="calico-system/whisker-68d6cc8c8-bzrtz" Oct 29 23:32:51.884597 containerd[1503]: time="2025-10-29T23:32:51.884540446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68d6cc8c8-bzrtz,Uid:f5482b89-57c5-421d-ae18-209fa743d482,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:52.083373 systemd-networkd[1435]: cali92d6ad2721b: Link UP Oct 29 23:32:52.083595 systemd-networkd[1435]: cali92d6ad2721b: Gained carrier Oct 29 23:32:52.099435 containerd[1503]: 2025-10-29 23:32:51.923 [INFO][3916] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:52.099435 containerd[1503]: 2025-10-29 23:32:51.959 [INFO][3916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0 whisker-68d6cc8c8- calico-system f5482b89-57c5-421d-ae18-209fa743d482 905 0 2025-10-29 23:32:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68d6cc8c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68d6cc8c8-bzrtz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali92d6ad2721b [] [] }} ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-" Oct 29 23:32:52.099435 containerd[1503]: 2025-10-29 23:32:51.959 [INFO][3916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.099435 containerd[1503]: 2025-10-29 23:32:52.033 [INFO][3931] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" HandleID="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Workload="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.034 [INFO][3931] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" 
HandleID="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Workload="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000504bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68d6cc8c8-bzrtz", "timestamp":"2025-10-29 23:32:52.03387209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.034 [INFO][3931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.034 [INFO][3931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.034 [INFO][3931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.046 [INFO][3931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" host="localhost" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.053 [INFO][3931] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.058 [INFO][3931] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.060 [INFO][3931] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.062 [INFO][3931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:52.099678 containerd[1503]: 2025-10-29 23:32:52.063 [INFO][3931] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" host="localhost" Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.065 [INFO][3931] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.068 [INFO][3931] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" host="localhost" Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.073 [INFO][3931] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" host="localhost" Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.073 [INFO][3931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" host="localhost" Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.073 [INFO][3931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:52.099937 containerd[1503]: 2025-10-29 23:32:52.073 [INFO][3931] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" HandleID="k8s-pod-network.70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Workload="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.100065 containerd[1503]: 2025-10-29 23:32:52.076 [INFO][3916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0", GenerateName:"whisker-68d6cc8c8-", Namespace:"calico-system", SelfLink:"", UID:"f5482b89-57c5-421d-ae18-209fa743d482", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68d6cc8c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68d6cc8c8-bzrtz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali92d6ad2721b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:52.100065 containerd[1503]: 2025-10-29 23:32:52.076 [INFO][3916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.100181 containerd[1503]: 2025-10-29 23:32:52.076 [INFO][3916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92d6ad2721b ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.100181 containerd[1503]: 2025-10-29 23:32:52.084 [INFO][3916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.100221 containerd[1503]: 2025-10-29 23:32:52.084 [INFO][3916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0", GenerateName:"whisker-68d6cc8c8-", Namespace:"calico-system", SelfLink:"", UID:"f5482b89-57c5-421d-ae18-209fa743d482", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68d6cc8c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e", Pod:"whisker-68d6cc8c8-bzrtz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali92d6ad2721b", MAC:"1a:1b:d0:b9:e6:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:52.100295 containerd[1503]: 2025-10-29 23:32:52.097 [INFO][3916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" Namespace="calico-system" Pod="whisker-68d6cc8c8-bzrtz" WorkloadEndpoint="localhost-k8s-whisker--68d6cc8c8--bzrtz-eth0" Oct 29 23:32:52.163387 containerd[1503]: time="2025-10-29T23:32:52.163346263Z" level=info msg="connecting to shim 70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e" address="unix:///run/containerd/s/9f774e2c908b4cde9aeb0ecc6b0c2d3c23a05961287aee9ac5d3c66e4edfb87b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:52.195425 systemd[1]: Started cri-containerd-70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e.scope - libcontainer container 70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e. 
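Before the sandbox starts, containerd logs "connecting to shim ... protocol=ttrpc version=3": each sandbox gets its own shim process reachable over a unix socket under /run/containerd/s/, and containerd speaks ttrpc across it. A sketch of just the dial step, reusing the socket path from the entry above (the ttrpc handshake itself is omitted):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken from the "connecting to shim" entry above; it only
	// exists on that host while the sandbox's shim is running.
	const shimSocket = "/run/containerd/s/9f774e2c908b4cde9aeb0ecc6b0c2d3c23a05961287aee9ac5d3c66e4edfb87b"

	conn, err := net.DialTimeout("unix", shimSocket, 2*time.Second)
	if err != nil {
		fmt.Println("shim not reachable:", err)
		return
	}
	defer conn.Close()
	// A real client would wrap conn in a ttrpc client here and call the task
	// service; printing the path is enough for this sketch.
	fmt.Println("connected to shim at", shimSocket)
}
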
Oct 29 23:32:52.207031 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:52.242272 containerd[1503]: time="2025-10-29T23:32:52.242188799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68d6cc8c8-bzrtz,Uid:f5482b89-57c5-421d-ae18-209fa743d482,Namespace:calico-system,Attempt:0,} returns sandbox id \"70eb168fe73322ae26893e0e77191bb0f4d61544be983f62923da942497bb27e\"" Oct 29 23:32:52.243779 containerd[1503]: time="2025-10-29T23:32:52.243720674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:32:52.468231 containerd[1503]: time="2025-10-29T23:32:52.468101382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:52.469811 containerd[1503]: time="2025-10-29T23:32:52.469767401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:32:52.469811 containerd[1503]: time="2025-10-29T23:32:52.469839414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:32:52.471915 kubelet[2661]: E1029 23:32:52.471869 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:52.473286 kubelet[2661]: E1029 23:32:52.473259 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:52.485798 kubelet[2661]: E1029 23:32:52.485738 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c7cbba715eb34548b2bdb54b4c82c9c6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:52.487663 containerd[1503]: time="2025-10-29T23:32:52.487571673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:32:52.708909 containerd[1503]: time="2025-10-29T23:32:52.708845985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:52.709920 containerd[1503]: time="2025-10-29T23:32:52.709881931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:32:52.710003 containerd[1503]: time="2025-10-29T23:32:52.709968666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:52.710152 kubelet[2661]: E1029 23:32:52.710101 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:52.710477 kubelet[2661]: E1029 23:32:52.710164 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:52.710507 kubelet[2661]: E1029 23:32:52.710320 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:52.711628 kubelet[2661]: E1029 23:32:52.711563 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68d6cc8c8-bzrtz" podUID="f5482b89-57c5-421d-ae18-209fa743d482" Oct 29 23:32:53.379647 kubelet[2661]: I1029 23:32:53.379592 2661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b435311e-f7f8-4128-85fc-2874c0aa8c02" path="/var/lib/kubelet/pods/b435311e-f7f8-4128-85fc-2874c0aa8c02/volumes" Oct 29 23:32:53.526159 kubelet[2661]: E1029 23:32:53.526083 
2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68d6cc8c8-bzrtz" podUID="f5482b89-57c5-421d-ae18-209fa743d482" Oct 29 23:32:54.105440 systemd-networkd[1435]: cali92d6ad2721b: Gained IPv6LL Oct 29 23:32:56.897954 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:35306.service - OpenSSH per-connection server daemon (10.0.0.1:35306). Oct 29 23:32:56.952822 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 35306 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:32:56.954456 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:56.958773 systemd-logind[1486]: New session 8 of user core. Oct 29 23:32:56.967450 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 23:32:57.121282 sshd[4096]: Connection closed by 10.0.0.1 port 35306 Oct 29 23:32:57.121793 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:57.125344 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:35306.service: Deactivated successfully. Oct 29 23:32:57.126966 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 23:32:57.127673 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Oct 29 23:32:57.128816 systemd-logind[1486]: Removed session 8. 
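After the ErrImagePull entries above, the kubelet does not retry immediately: the follow-up "Back-off pulling image" messages for whisker and whisker-backend come from a per-image doubling back-off. A sketch of that schedule, with the 10-second initial delay and 5-minute cap assumed from kubelet defaults rather than taken from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial delay, doubling per failure, 5m cap.
	const initial, maxDelay = 10 * time.Second, 5 * time.Minute

	delay := initial
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d for whisker:v3.30.4 fails -> wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
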
Oct 29 23:32:57.377984 containerd[1503]: time="2025-10-29T23:32:57.377928427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hrtth,Uid:dbd38070-358f-48dd-8533-17b5e6ecf6e9,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:57.378906 containerd[1503]: time="2025-10-29T23:32:57.377928427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-bb7xs,Uid:aa5b4e14-b80b-4af9-80b3-9f6279c72ae8,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:57.523900 systemd-networkd[1435]: cali71b54a40b88: Link UP Oct 29 23:32:57.524460 systemd-networkd[1435]: cali71b54a40b88: Gained carrier Oct 29 23:32:57.539942 containerd[1503]: 2025-10-29 23:32:57.423 [INFO][4144] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:57.539942 containerd[1503]: 2025-10-29 23:32:57.439 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0 calico-apiserver-674644db6- calico-apiserver aa5b4e14-b80b-4af9-80b3-9f6279c72ae8 839 0 2025-10-29 23:32:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674644db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-674644db6-bb7xs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali71b54a40b88 [] [] }} ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-" Oct 29 23:32:57.539942 containerd[1503]: 2025-10-29 23:32:57.440 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.539942 containerd[1503]: 2025-10-29 23:32:57.472 [INFO][4162] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" HandleID="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Workload="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.472 [INFO][4162] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" HandleID="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Workload="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-674644db6-bb7xs", "timestamp":"2025-10-29 23:32:57.472605699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.472 [INFO][4162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.472 [INFO][4162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.472 [INFO][4162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.483 [INFO][4162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" host="localhost" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.488 [INFO][4162] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.496 [INFO][4162] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.498 [INFO][4162] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.501 [INFO][4162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:57.540184 containerd[1503]: 2025-10-29 23:32:57.501 [INFO][4162] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" host="localhost" Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.502 [INFO][4162] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3 Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.507 [INFO][4162] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" host="localhost" Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4162] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" host="localhost" Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" host="localhost" Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:57.540409 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4162] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" HandleID="k8s-pod-network.d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Workload="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.540527 containerd[1503]: 2025-10-29 23:32:57.520 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0", GenerateName:"calico-apiserver-674644db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5b4e14-b80b-4af9-80b3-9f6279c72ae8", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674644db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-674644db6-bb7xs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b54a40b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:57.540576 containerd[1503]: 2025-10-29 23:32:57.520 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.540576 containerd[1503]: 2025-10-29 23:32:57.520 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71b54a40b88 ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.540576 containerd[1503]: 2025-10-29 23:32:57.524 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.540639 containerd[1503]: 2025-10-29 23:32:57.524 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0", GenerateName:"calico-apiserver-674644db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa5b4e14-b80b-4af9-80b3-9f6279c72ae8", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674644db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3", Pod:"calico-apiserver-674644db6-bb7xs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71b54a40b88", MAC:"02:8a:0f:60:26:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:57.540683 containerd[1503]: 2025-10-29 23:32:57.536 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-bb7xs" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--bb7xs-eth0" Oct 29 23:32:57.565450 containerd[1503]: time="2025-10-29T23:32:57.565382470Z" level=info msg="connecting to shim d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3" address="unix:///run/containerd/s/a664fb10a97e56cd4362c55d0b6e1eeda2f3657cc417d67f4982fbe599f80e14" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:57.610500 systemd[1]: Started cri-containerd-d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3.scope - libcontainer container d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3. 
Oct 29 23:32:57.649960 systemd-networkd[1435]: cali9cc17b3869b: Link UP Oct 29 23:32:57.650384 systemd-networkd[1435]: cali9cc17b3869b: Gained carrier Oct 29 23:32:57.667510 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:57.669573 containerd[1503]: 2025-10-29 23:32:57.424 [INFO][4132] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:57.669573 containerd[1503]: 2025-10-29 23:32:57.445 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--hrtth-eth0 goldmane-666569f655- calico-system dbd38070-358f-48dd-8533-17b5e6ecf6e9 846 0 2025-10-29 23:32:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-hrtth eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9cc17b3869b [] [] }} ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-" Oct 29 23:32:57.669573 containerd[1503]: 2025-10-29 23:32:57.445 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.669573 containerd[1503]: 2025-10-29 23:32:57.473 [INFO][4164] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" HandleID="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Workload="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.474 [INFO][4164] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" HandleID="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Workload="localhost-k8s-goldmane--666569f655--hrtth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-hrtth", "timestamp":"2025-10-29 23:32:57.473916947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.474 [INFO][4164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.515 [INFO][4164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.585 [INFO][4164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" host="localhost" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.599 [INFO][4164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.612 [INFO][4164] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.615 [INFO][4164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.619 [INFO][4164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:32:57.669930 containerd[1503]: 2025-10-29 23:32:57.619 [INFO][4164] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" host="localhost" Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.622 [INFO][4164] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.628 [INFO][4164] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" host="localhost" Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.636 [INFO][4164] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" host="localhost" Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.637 [INFO][4164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" host="localhost" Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.638 [INFO][4164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
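The two CNI ADD handlers above run concurrently but are serialized by the host-wide IPAM lock: [4164] logs "About to acquire" at 23:32:57.474 and only acquires at 23:32:57.515, the moment [4162] releases. A minimal in-process analogue of that serialization (the real lock is coordinated through the datastore, not a mutex):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex // stand-in for the host-wide IPAM lock
	var wg sync.WaitGroup

	assign := func(pod string) {
		defer wg.Done()
		ipamLock.Lock()
		defer ipamLock.Unlock()
		fmt.Println("assigning address for", pod)
		time.Sleep(40 * time.Millisecond) // stand-in for block lookup and write
	}

	for _, pod := range []string{"calico-apiserver-674644db6-bb7xs", "goldmane-666569f655-hrtth"} {
		wg.Add(1)
		go assign(pod)
	}
	wg.Wait()
}
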
Oct 29 23:32:57.670229 containerd[1503]: 2025-10-29 23:32:57.638 [INFO][4164] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" HandleID="k8s-pod-network.681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Workload="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.670478 containerd[1503]: 2025-10-29 23:32:57.642 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hrtth-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"dbd38070-358f-48dd-8533-17b5e6ecf6e9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-hrtth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cc17b3869b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:57.670478 containerd[1503]: 2025-10-29 23:32:57.642 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.670609 containerd[1503]: 2025-10-29 23:32:57.642 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cc17b3869b ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.670609 containerd[1503]: 2025-10-29 23:32:57.651 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.670671 containerd[1503]: 2025-10-29 23:32:57.652 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--hrtth-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"dbd38070-358f-48dd-8533-17b5e6ecf6e9", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea", Pod:"goldmane-666569f655-hrtth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9cc17b3869b", MAC:"56:81:ad:d3:d9:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:57.670739 containerd[1503]: 2025-10-29 23:32:57.666 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" Namespace="calico-system" Pod="goldmane-666569f655-hrtth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--hrtth-eth0" Oct 29 23:32:57.702191 containerd[1503]: time="2025-10-29T23:32:57.701860762Z" level=info msg="connecting to shim 681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea" address="unix:///run/containerd/s/e25fa8774d9f0a756bc38c396aaeb434b2274dd18298023e65561d11bba60659" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:57.715167 containerd[1503]: time="2025-10-29T23:32:57.715119821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-bb7xs,Uid:aa5b4e14-b80b-4af9-80b3-9f6279c72ae8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d572ef0c1dad5af1fcd25e0ccc859efebe73721dcdc76f4af4425632171982f3\"" Oct 29 23:32:57.717362 containerd[1503]: time="2025-10-29T23:32:57.717316689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:57.747459 systemd[1]: Started cri-containerd-681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea.scope - libcontainer container 681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea. 
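The "not found" pull errors on either side of this point (whisker and whisker-backend above, apiserver and goldmane just below, all at v3.30.4) are plain 404s from the registry's manifest endpoint. A rough probe of the same check over the OCI distribution API; the anonymous token flow at https://ghcr.io/token is assumed from ghcr.io's usual Bearer challenge, not taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	const repo, tag = "flatcar/calico/whisker", "v3.30.4"

	// Step 1: anonymous pull token (endpoint and parameters assumed).
	tokResp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=" +
		url.QueryEscape("repository:"+repo+":pull"))
	if err != nil {
		fmt.Println("token request failed:", err)
		return
	}
	defer tokResp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tokResp.Body).Decode(&tok); err != nil {
		fmt.Println("decoding token:", err)
		return
	}

	// Step 2: ask for the tag's manifest; a 404 here is what containerd
	// surfaces as "ghcr.io/flatcar/calico/whisker:v3.30.4: not found".
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		fmt.Println("building request:", err)
		return
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("manifest request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(repo+":"+tag, "->", resp.Status) // expect 404 Not Found
}
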
Oct 29 23:32:57.759137 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:32:57.784692 containerd[1503]: time="2025-10-29T23:32:57.784630709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hrtth,Uid:dbd38070-358f-48dd-8533-17b5e6ecf6e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"681694c2d5584c1cd2916b6b3b7152775e1f3f02039fb5183394b40681086aea\"" Oct 29 23:32:57.864591 kubelet[2661]: I1029 23:32:57.864521 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:57.924213 containerd[1503]: time="2025-10-29T23:32:57.924076470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:57.926631 containerd[1503]: time="2025-10-29T23:32:57.926588068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:57.926705 containerd[1503]: time="2025-10-29T23:32:57.926689003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:57.927112 kubelet[2661]: E1029 23:32:57.926919 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:57.927112 kubelet[2661]: E1029 23:32:57.926972 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:57.927304 kubelet[2661]: E1029 23:32:57.927228 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-bb7xs_calico-apiserver(aa5b4e14-b80b-4af9-80b3-9f6279c72ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:57.927533 containerd[1503]: time="2025-10-29T23:32:57.927491891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:32:57.928891 kubelet[2661]: E1029 23:32:57.928848 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:32:57.987804 containerd[1503]: time="2025-10-29T23:32:57.987730829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\" id:\"9d049c1166b056b650117f7dc9250fb8c42524886f581bff1d1738196a7e03a4\" pid:4297 exit_status:1 exited_at:{seconds:1761780777 nanos:987376693}" Oct 29 23:32:58.082116 containerd[1503]: time="2025-10-29T23:32:58.082070010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\" id:\"22939d399d031d3ba46de4d5e1fb1b2648b09df0f57fe02818d630ddc7ccdebd\" pid:4323 exit_status:1 exited_at:{seconds:1761780778 nanos:81749001}" Oct 29 23:32:58.131274 containerd[1503]: time="2025-10-29T23:32:58.131189020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:58.132156 containerd[1503]: time="2025-10-29T23:32:58.132101681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:32:58.132231 containerd[1503]: time="2025-10-29T23:32:58.132162650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:58.132432 kubelet[2661]: E1029 23:32:58.132388 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:58.132479 kubelet[2661]: E1029 23:32:58.132446 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:58.132655 kubelet[2661]: E1029 23:32:58.132601 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s28g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hrtth_calico-system(dbd38070-358f-48dd-8533-17b5e6ecf6e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:58.134830 kubelet[2661]: E1029 23:32:58.134745 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:32:58.546611 kubelet[2661]: E1029 23:32:58.546560 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:32:58.549521 kubelet[2661]: E1029 23:32:58.549479 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:32:58.577480 kubelet[2661]: I1029 23:32:58.577433 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 23:32:59.225463 systemd-networkd[1435]: cali71b54a40b88: Gained IPv6LL Oct 29 23:32:59.225861 systemd-networkd[1435]: cali9cc17b3869b: Gained IPv6LL Oct 29 23:32:59.374067 systemd-networkd[1435]: vxlan.calico: Link UP Oct 29 23:32:59.374081 systemd-networkd[1435]: vxlan.calico: Gained carrier Oct 29 23:32:59.550918 kubelet[2661]: E1029 23:32:59.550483 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:32:59.550918 kubelet[2661]: E1029 23:32:59.550539 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:33:00.377848 containerd[1503]: time="2025-10-29T23:33:00.377794956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lx67m,Uid:ce73ae69-de96-4cf2-9fd5-5842622760d9,Namespace:kube-system,Attempt:0,}" Oct 29 23:33:00.525575 systemd-networkd[1435]: cali1f5e79e70f2: Link UP Oct 29 23:33:00.525822 systemd-networkd[1435]: cali1f5e79e70f2: Gained carrier Oct 29 23:33:00.542617 containerd[1503]: 2025-10-29 23:33:00.429 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lx67m-eth0 coredns-674b8bbfcf- kube-system ce73ae69-de96-4cf2-9fd5-5842622760d9 844 0 2025-10-29 23:32:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lx67m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f5e79e70f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-" Oct 29 23:33:00.542617 containerd[1503]: 2025-10-29 23:33:00.430 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.542617 containerd[1503]: 2025-10-29 23:33:00.455 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" HandleID="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Workload="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.455 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" HandleID="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Workload="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lx67m", "timestamp":"2025-10-29 23:33:00.455565599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.455 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.455 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.456 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.467 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" host="localhost" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.472 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.478 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.480 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.483 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:00.543233 containerd[1503]: 2025-10-29 23:33:00.483 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" host="localhost" Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.485 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30 Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.498 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" host="localhost" Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.520 [INFO][4516] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" host="localhost" Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.520 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" host="localhost" Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.520 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:33:00.543592 containerd[1503]: 2025-10-29 23:33:00.521 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" HandleID="k8s-pod-network.453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Workload="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.543701 containerd[1503]: 2025-10-29 23:33:00.523 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lx67m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce73ae69-de96-4cf2-9fd5-5842622760d9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lx67m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f5e79e70f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:00.543910 containerd[1503]: 2025-10-29 23:33:00.523 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.543910 containerd[1503]: 2025-10-29 23:33:00.523 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f5e79e70f2 ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.543910 containerd[1503]: 2025-10-29 23:33:00.526 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.544009 
containerd[1503]: 2025-10-29 23:33:00.526 [INFO][4502] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lx67m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce73ae69-de96-4cf2-9fd5-5842622760d9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30", Pod:"coredns-674b8bbfcf-lx67m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f5e79e70f2", MAC:"c2:f5:03:b0:ea:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:00.544009 containerd[1503]: 2025-10-29 23:33:00.539 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" Namespace="kube-system" Pod="coredns-674b8bbfcf-lx67m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lx67m-eth0" Oct 29 23:33:00.568011 containerd[1503]: time="2025-10-29T23:33:00.567492279Z" level=info msg="connecting to shim 453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30" address="unix:///run/containerd/s/6d3a7721215be80f22a0b0e5d60027233467b51993bca2b09fad647df196e89b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:33:00.596644 systemd[1]: Started cri-containerd-453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30.scope - libcontainer container 453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30. 
Oct 29 23:33:00.608284 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:33:00.629685 containerd[1503]: time="2025-10-29T23:33:00.629570389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lx67m,Uid:ce73ae69-de96-4cf2-9fd5-5842622760d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30\"" Oct 29 23:33:00.641019 containerd[1503]: time="2025-10-29T23:33:00.640972444Z" level=info msg="CreateContainer within sandbox \"453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:33:00.666128 containerd[1503]: time="2025-10-29T23:33:00.665425479Z" level=info msg="Container b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:33:00.670649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610204739.mount: Deactivated successfully. Oct 29 23:33:00.673363 containerd[1503]: time="2025-10-29T23:33:00.673283127Z" level=info msg="CreateContainer within sandbox \"453639796e7334ee03e6ec9fc95a4fefe33498ef274b7164cca7adee7f591e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5\"" Oct 29 23:33:00.674264 containerd[1503]: time="2025-10-29T23:33:00.674116971Z" level=info msg="StartContainer for \"b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5\"" Oct 29 23:33:00.676541 containerd[1503]: time="2025-10-29T23:33:00.676508927Z" level=info msg="connecting to shim b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5" address="unix:///run/containerd/s/6d3a7721215be80f22a0b0e5d60027233467b51993bca2b09fad647df196e89b" protocol=ttrpc version=3 Oct 29 23:33:00.703461 systemd[1]: Started cri-containerd-b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5.scope - libcontainer container b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5. 
Oct 29 23:33:00.732230 containerd[1503]: time="2025-10-29T23:33:00.732002257Z" level=info msg="StartContainer for \"b0132ee0c830fd929c70213070c3f9a3f51d6e3acad2e5cf9d3b65d0bd1ae7c5\" returns successfully" Oct 29 23:33:01.082610 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Oct 29 23:33:01.377633 containerd[1503]: time="2025-10-29T23:33:01.377453711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-wwszd,Uid:1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:33:01.378122 containerd[1503]: time="2025-10-29T23:33:01.378077121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmtzx,Uid:6057bac1-e88b-43f7-a462-8ee6bc372920,Namespace:kube-system,Attempt:0,}" Oct 29 23:33:01.378481 containerd[1503]: time="2025-10-29T23:33:01.378213101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jrl9,Uid:1344333c-5c8b-4908-b272-45117ec8f68a,Namespace:calico-system,Attempt:0,}" Oct 29 23:33:01.378481 containerd[1503]: time="2025-10-29T23:33:01.377453390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b49b9964b-6j8qw,Uid:628d1c40-ffd8-42f2-bf12-78878eaf122b,Namespace:calico-system,Attempt:0,}" Oct 29 23:33:01.615853 systemd-networkd[1435]: cali3474fb06eb0: Link UP Oct 29 23:33:01.616664 systemd-networkd[1435]: cali3474fb06eb0: Gained carrier Oct 29 23:33:01.727614 kubelet[2661]: I1029 23:33:01.727082 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lx67m" podStartSLOduration=40.727063298 podStartE2EDuration="40.727063298s" podCreationTimestamp="2025-10-29 23:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:33:01.624952286 +0000 UTC m=+48.344416806" watchObservedRunningTime="2025-10-29 23:33:01.727063298 +0000 UTC m=+48.446527738" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.463 [INFO][4617] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--674644db6--wwszd-eth0 calico-apiserver-674644db6- calico-apiserver 1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af 848 0 2025-10-29 23:32:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:674644db6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-674644db6-wwszd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3474fb06eb0 [] [] }} ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.464 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.503 [INFO][4682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" HandleID="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Workload="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.503 [INFO][4682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" HandleID="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Workload="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000181b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-674644db6-wwszd", "timestamp":"2025-10-29 23:33:01.503549981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.503 [INFO][4682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.503 [INFO][4682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.503 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.514 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.519 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.524 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.526 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.529 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.529 [INFO][4682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.531 [INFO][4682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86 Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.589 [INFO][4682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4682] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" host="localhost" Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 23:33:01.732868 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" HandleID="k8s-pod-network.99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Workload="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.613 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674644db6--wwszd-eth0", GenerateName:"calico-apiserver-674644db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674644db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-674644db6-wwszd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3474fb06eb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.613 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.613 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3474fb06eb0 ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.615 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.618 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--674644db6--wwszd-eth0", GenerateName:"calico-apiserver-674644db6-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"674644db6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86", Pod:"calico-apiserver-674644db6-wwszd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3474fb06eb0", MAC:"4e:82:f8:11:25:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.734195 containerd[1503]: 2025-10-29 23:33:01.728 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" Namespace="calico-apiserver" Pod="calico-apiserver-674644db6-wwszd" WorkloadEndpoint="localhost-k8s-calico--apiserver--674644db6--wwszd-eth0" Oct 29 23:33:01.795838 systemd-networkd[1435]: calif282a2c8530: Link UP Oct 29 23:33:01.796036 systemd-networkd[1435]: calif282a2c8530: Gained carrier Oct 29 23:33:01.797298 containerd[1503]: time="2025-10-29T23:33:01.797219010Z" level=info msg="connecting to shim 99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86" address="unix:///run/containerd/s/7a1e45e1c9e7bcf0e2f4d72b8ef05cbce0909e25cc620e93a2682b3ec4415c9f" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.464 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0 calico-kube-controllers-5b49b9964b- calico-system 628d1c40-ffd8-42f2-bf12-78878eaf122b 847 0 2025-10-29 23:32:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b49b9964b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b49b9964b-6j8qw eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif282a2c8530 [] [] }} ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.464 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.504 [INFO][4681] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" HandleID="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Workload="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.504 [INFO][4681] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" HandleID="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Workload="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000431250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b49b9964b-6j8qw", "timestamp":"2025-10-29 23:33:01.504111262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.504 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.610 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.625 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.735 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.744 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.747 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.751 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.751 [INFO][4681] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.755 [INFO][4681] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6 Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.765 [INFO][4681] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.781 [INFO][4681] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.781 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" host="localhost" Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.782 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:33:01.822546 containerd[1503]: 2025-10-29 23:33:01.782 [INFO][4681] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" HandleID="k8s-pod-network.66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Workload="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.786 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0", GenerateName:"calico-kube-controllers-5b49b9964b-", Namespace:"calico-system", SelfLink:"", UID:"628d1c40-ffd8-42f2-bf12-78878eaf122b", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b49b9964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b49b9964b-6j8qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif282a2c8530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.787 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.787 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif282a2c8530 ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.795 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.796 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0", GenerateName:"calico-kube-controllers-5b49b9964b-", Namespace:"calico-system", SelfLink:"", UID:"628d1c40-ffd8-42f2-bf12-78878eaf122b", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b49b9964b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6", Pod:"calico-kube-controllers-5b49b9964b-6j8qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif282a2c8530", MAC:"fe:4f:15:ef:b3:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.823772 containerd[1503]: 2025-10-29 23:33:01.819 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" Namespace="calico-system" Pod="calico-kube-controllers-5b49b9964b-6j8qw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b49b9964b--6j8qw-eth0" Oct 29 23:33:01.832631 systemd[1]: Started cri-containerd-99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86.scope - libcontainer container 99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86. 
Oct 29 23:33:01.861966 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:33:01.901386 systemd-networkd[1435]: calib68a9a6ef64: Link UP Oct 29 23:33:01.902980 systemd-networkd[1435]: calib68a9a6ef64: Gained carrier Oct 29 23:33:01.904873 containerd[1503]: time="2025-10-29T23:33:01.904829344Z" level=info msg="connecting to shim 66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6" address="unix:///run/containerd/s/cd80493da1d5ec67a506f6c7ad6e8b21921ced721a96faa5a3297a3bfeaca96b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:33:01.908090 containerd[1503]: time="2025-10-29T23:33:01.907906192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-674644db6-wwszd,Uid:1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99888c698ec18c73fac358767e0a2f423ac84be74fb5f677f0fb7fb32b0e4c86\"" Oct 29 23:33:01.914188 containerd[1503]: time="2025-10-29T23:33:01.913506129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.464 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0 coredns-674b8bbfcf- kube-system 6057bac1-e88b-43f7-a462-8ee6bc372920 845 0 2025-10-29 23:32:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xmtzx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib68a9a6ef64 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.464 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.507 [INFO][4691] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" HandleID="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Workload="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.507 [INFO][4691] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" HandleID="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Workload="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb080), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xmtzx", "timestamp":"2025-10-29 23:33:01.507641217 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:33:01.934036 
containerd[1503]: 2025-10-29 23:33:01.507 [INFO][4691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.782 [INFO][4691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.782 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.806 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.835 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.842 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.851 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.854 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.854 [INFO][4691] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.856 [INFO][4691] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.869 [INFO][4691] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.884 [INFO][4691] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.884 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" host="localhost" Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.884 [INFO][4691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:33:01.934036 containerd[1503]: 2025-10-29 23:33:01.884 [INFO][4691] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" HandleID="k8s-pod-network.5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Workload="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.935104 containerd[1503]: 2025-10-29 23:33:01.892 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6057bac1-e88b-43f7-a462-8ee6bc372920", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xmtzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib68a9a6ef64", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.935104 containerd[1503]: 2025-10-29 23:33:01.892 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.935104 containerd[1503]: 2025-10-29 23:33:01.892 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib68a9a6ef64 ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.935104 containerd[1503]: 2025-10-29 23:33:01.905 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.935104 
containerd[1503]: 2025-10-29 23:33:01.906 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6057bac1-e88b-43f7-a462-8ee6bc372920", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d", Pod:"coredns-674b8bbfcf-xmtzx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib68a9a6ef64", MAC:"16:f5:a6:f3:e7:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:01.935104 containerd[1503]: 2025-10-29 23:33:01.931 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" Namespace="kube-system" Pod="coredns-674b8bbfcf-xmtzx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xmtzx-eth0" Oct 29 23:33:01.998503 systemd[1]: Started cri-containerd-66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6.scope - libcontainer container 66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6. 
Oct 29 23:33:02.001316 systemd-networkd[1435]: cali28283d942aa: Link UP Oct 29 23:33:02.003068 systemd-networkd[1435]: cali28283d942aa: Gained carrier Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.458 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9jrl9-eth0 csi-node-driver- calico-system 1344333c-5c8b-4908-b272-45117ec8f68a 752 0 2025-10-29 23:32:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9jrl9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28283d942aa [] [] }} ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.460 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.513 [INFO][4675] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" HandleID="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Workload="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.513 [INFO][4675] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" HandleID="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Workload="localhost-k8s-csi--node--driver--9jrl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9jrl9", "timestamp":"2025-10-29 23:33:01.513165623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.513 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.885 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.886 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.905 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.940 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.946 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.952 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.956 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.957 [INFO][4675] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.960 [INFO][4675] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.969 [INFO][4675] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.992 [INFO][4675] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.992 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" host="localhost" Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.992 [INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:33:02.036308 containerd[1503]: 2025-10-29 23:33:01.992 [INFO][4675] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" HandleID="k8s-pod-network.1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Workload="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:01.997 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jrl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1344333c-5c8b-4908-b272-45117ec8f68a", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9jrl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28283d942aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:01.998 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:01.998 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28283d942aa ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:02.001 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:02.004 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jrl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1344333c-5c8b-4908-b272-45117ec8f68a", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb", Pod:"csi-node-driver-9jrl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28283d942aa", MAC:"16:af:8f:44:b5:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:33:02.036801 containerd[1503]: 2025-10-29 23:33:02.032 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" Namespace="calico-system" Pod="csi-node-driver-9jrl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jrl9-eth0" Oct 29 23:33:02.039333 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:33:02.073314 containerd[1503]: time="2025-10-29T23:33:02.073269718Z" level=info msg="connecting to shim 5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d" address="unix:///run/containerd/s/822a84dca921321d4a2c8bef90e058baed10afd4046b0359be9d1f479ca41fbc" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:33:02.085183 containerd[1503]: time="2025-10-29T23:33:02.085140778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b49b9964b-6j8qw,Uid:628d1c40-ffd8-42f2-bf12-78878eaf122b,Namespace:calico-system,Attempt:0,} returns sandbox id \"66ef01cbb71b9a791f85f0bcb41a173261ddc543c01501f0338e1bc1c321e2c6\"" Oct 29 23:33:02.112458 systemd[1]: Started cri-containerd-5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d.scope - libcontainer container 5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d. 
Oct 29 23:33:02.128911 containerd[1503]: time="2025-10-29T23:33:02.128803270Z" level=info msg="connecting to shim 1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb" address="unix:///run/containerd/s/c8602f9c6e7640f33a68370628466569c3d197d73daff424b17f53437c34c34d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:33:02.130931 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:33:02.136112 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:51578.service - OpenSSH per-connection server daemon (10.0.0.1:51578). Oct 29 23:33:02.151645 systemd[1]: Started cri-containerd-1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb.scope - libcontainer container 1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb. Oct 29 23:33:02.161234 containerd[1503]: time="2025-10-29T23:33:02.161183426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmtzx,Uid:6057bac1-e88b-43f7-a462-8ee6bc372920,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d\"" Oct 29 23:33:02.170003 containerd[1503]: time="2025-10-29T23:33:02.169954722Z" level=info msg="CreateContainer within sandbox \"5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:33:02.172113 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 23:33:02.204418 containerd[1503]: time="2025-10-29T23:33:02.204374491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jrl9,Uid:1344333c-5c8b-4908-b272-45117ec8f68a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d0ead285e6875947edf67cd97f91d060316f947e65c425c9a834d6641089ceb\"" Oct 29 23:33:02.212035 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 51578 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:02.214483 containerd[1503]: time="2025-10-29T23:33:02.214432851Z" level=info msg="Container c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:33:02.214481 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:02.219621 systemd-logind[1486]: New session 9 of user core. Oct 29 23:33:02.224250 containerd[1503]: time="2025-10-29T23:33:02.224190328Z" level=info msg="CreateContainer within sandbox \"5e06946174496666b31c35afd4df9060990a39b6fe457b85f390a2570c4a5c9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74\"" Oct 29 23:33:02.224943 containerd[1503]: time="2025-10-29T23:33:02.224746928Z" level=info msg="StartContainer for \"c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74\"" Oct 29 23:33:02.225808 containerd[1503]: time="2025-10-29T23:33:02.225775915Z" level=info msg="connecting to shim c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74" address="unix:///run/containerd/s/822a84dca921321d4a2c8bef90e058baed10afd4046b0359be9d1f479ca41fbc" protocol=ttrpc version=3 Oct 29 23:33:02.228523 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 29 23:33:02.249466 systemd[1]: Started cri-containerd-c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74.scope - libcontainer container c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74. Oct 29 23:33:02.280010 containerd[1503]: time="2025-10-29T23:33:02.279970435Z" level=info msg="StartContainer for \"c38eff3a3c2bca2bb104e2f4a2b13fe40dcdc019a3dbfe7235cc84d513117e74\" returns successfully" Oct 29 23:33:02.441601 sshd[4949]: Connection closed by 10.0.0.1 port 51578 Oct 29 23:33:02.441944 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:02.447994 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:51578.service: Deactivated successfully. Oct 29 23:33:02.451651 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 23:33:02.452806 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Oct 29 23:33:02.455638 systemd-logind[1486]: Removed session 9. Oct 29 23:33:02.553522 systemd-networkd[1435]: cali1f5e79e70f2: Gained IPv6LL Oct 29 23:33:02.753193 containerd[1503]: time="2025-10-29T23:33:02.753116103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:02.754153 containerd[1503]: time="2025-10-29T23:33:02.754115567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:02.754228 containerd[1503]: time="2025-10-29T23:33:02.754194258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:02.754432 kubelet[2661]: E1029 23:33:02.754372 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:02.754901 kubelet[2661]: E1029 23:33:02.754719 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:02.755041 kubelet[2661]: E1029 23:33:02.754988 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpx5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-wwszd_calico-apiserver(1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:02.755196 containerd[1503]: time="2025-10-29T23:33:02.755056621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:33:02.756334 kubelet[2661]: E1029 23:33:02.756220 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af" Oct 29 23:33:03.065471 systemd-networkd[1435]: cali3474fb06eb0: Gained IPv6LL Oct 29 23:33:03.129434 systemd-networkd[1435]: calif282a2c8530: Gained IPv6LL Oct 29 23:33:03.168588 containerd[1503]: time="2025-10-29T23:33:03.168528448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:03.169722 containerd[1503]: time="2025-10-29T23:33:03.169653406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:33:03.169858 containerd[1503]: time="2025-10-29T23:33:03.169718015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:03.170023 kubelet[2661]: E1029 23:33:03.169956 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:03.170071 kubelet[2661]: E1029 23:33:03.170038 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:03.170345 kubelet[2661]: E1029 23:33:03.170296 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw49n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b49b9964b-6j8qw_calico-system(628d1c40-ffd8-42f2-bf12-78878eaf122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:03.170845 containerd[1503]: time="2025-10-29T23:33:03.170798847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:33:03.171564 kubelet[2661]: E1029 23:33:03.171522 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b" Oct 29 23:33:03.193473 systemd-networkd[1435]: cali28283d942aa: Gained IPv6LL Oct 29 23:33:03.571186 kubelet[2661]: E1029 23:33:03.570614 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af" Oct 29 23:33:03.571186 kubelet[2661]: E1029 23:33:03.570683 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b" Oct 29 23:33:03.616442 kubelet[2661]: I1029 23:33:03.615562 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xmtzx" podStartSLOduration=42.615544742 
podStartE2EDuration="42.615544742s" podCreationTimestamp="2025-10-29 23:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:33:02.575188947 +0000 UTC m=+49.294653387" watchObservedRunningTime="2025-10-29 23:33:03.615544742 +0000 UTC m=+50.335009182" Oct 29 23:33:03.641591 systemd-networkd[1435]: calib68a9a6ef64: Gained IPv6LL Oct 29 23:33:03.685723 containerd[1503]: time="2025-10-29T23:33:03.685533910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:03.689949 containerd[1503]: time="2025-10-29T23:33:03.689805271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:33:03.690099 containerd[1503]: time="2025-10-29T23:33:03.689862879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:33:03.690279 kubelet[2661]: E1029 23:33:03.690211 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:03.690343 kubelet[2661]: E1029 23:33:03.690299 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:03.690471 kubelet[2661]: E1029 23:33:03.690434 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:03.692770 containerd[1503]: time="2025-10-29T23:33:03.692726122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:33:04.185446 containerd[1503]: time="2025-10-29T23:33:04.185377367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:04.205176 containerd[1503]: time="2025-10-29T23:33:04.205057810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:33:04.205176 containerd[1503]: time="2025-10-29T23:33:04.205111977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:33:04.205352 kubelet[2661]: E1029 23:33:04.205309 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:04.205606 kubelet[2661]: E1029 23:33:04.205355 2661 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:04.205606 kubelet[2661]: E1029 23:33:04.205487 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:04.206725 kubelet[2661]: E1029 23:33:04.206654 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:33:04.380679 containerd[1503]: time="2025-10-29T23:33:04.380622342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:33:04.572260 kubelet[2661]: E1029 23:33:04.571933 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:33:04.604805 containerd[1503]: time="2025-10-29T23:33:04.604651460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:04.605742 containerd[1503]: time="2025-10-29T23:33:04.605636796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:33:04.605838 containerd[1503]: time="2025-10-29T23:33:04.605816221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:33:04.606311 kubelet[2661]: E1029 23:33:04.606066 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:04.606311 kubelet[2661]: E1029 23:33:04.606110 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:04.606311 kubelet[2661]: E1029 23:33:04.606233 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c7cbba715eb34548b2bdb54b4c82c9c6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:04.608260 containerd[1503]: time="2025-10-29T23:33:04.608211432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:33:04.812927 containerd[1503]: time="2025-10-29T23:33:04.812747493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:04.813927 containerd[1503]: time="2025-10-29T23:33:04.813821682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:33:04.813927 containerd[1503]: time="2025-10-29T23:33:04.813849446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:04.814355 kubelet[2661]: E1029 23:33:04.814149 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:04.814355 kubelet[2661]: E1029 23:33:04.814199 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:04.814547 kubelet[2661]: E1029 23:33:04.814499 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:04.815850 kubelet[2661]: E1029 23:33:04.815808 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68d6cc8c8-bzrtz" podUID="f5482b89-57c5-421d-ae18-209fa743d482" Oct 29 23:33:07.462129 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:51594.service - OpenSSH per-connection server daemon (10.0.0.1:51594). 
Oct 29 23:33:07.524097 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 51594 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:07.525522 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:07.529992 systemd-logind[1486]: New session 10 of user core. Oct 29 23:33:07.537608 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 29 23:33:07.708275 sshd[5006]: Connection closed by 10.0.0.1 port 51594 Oct 29 23:33:07.708983 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:07.719655 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:51594.service: Deactivated successfully. Oct 29 23:33:07.722931 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 23:33:07.723983 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Oct 29 23:33:07.727518 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608). Oct 29 23:33:07.729224 systemd-logind[1486]: Removed session 10. Oct 29 23:33:07.784456 sshd[5021]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:07.785981 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:07.790327 systemd-logind[1486]: New session 11 of user core. Oct 29 23:33:07.801478 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 29 23:33:07.994507 sshd[5024]: Connection closed by 10.0.0.1 port 51608 Oct 29 23:33:07.996592 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:08.005623 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:51608.service: Deactivated successfully. Oct 29 23:33:08.007398 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 23:33:08.008228 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Oct 29 23:33:08.015530 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:51614.service - OpenSSH per-connection server daemon (10.0.0.1:51614). Oct 29 23:33:08.016566 systemd-logind[1486]: Removed session 11. Oct 29 23:33:08.074123 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 51614 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:08.075542 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:08.080034 systemd-logind[1486]: New session 12 of user core. Oct 29 23:33:08.089454 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 23:33:08.221864 sshd[5039]: Connection closed by 10.0.0.1 port 51614 Oct 29 23:33:08.221979 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:08.225822 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Oct 29 23:33:08.226103 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:51614.service: Deactivated successfully. Oct 29 23:33:08.229122 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 23:33:08.230709 systemd-logind[1486]: Removed session 12. 
Oct 29 23:33:11.380221 containerd[1503]: time="2025-10-29T23:33:11.380092609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:11.611567 containerd[1503]: time="2025-10-29T23:33:11.611501965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:11.612552 containerd[1503]: time="2025-10-29T23:33:11.612498050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:11.612552 containerd[1503]: time="2025-10-29T23:33:11.612530494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:11.613091 kubelet[2661]: E1029 23:33:11.612693 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:11.613091 kubelet[2661]: E1029 23:33:11.612755 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:11.613091 kubelet[2661]: E1029 23:33:11.612888 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-bb7xs_calico-apiserver(aa5b4e14-b80b-4af9-80b3-9f6279c72ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:11.614262 kubelet[2661]: E1029 23:33:11.614033 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:33:13.242299 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:49556.service - OpenSSH per-connection server daemon (10.0.0.1:49556). Oct 29 23:33:13.302891 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 49556 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:13.304233 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:13.310065 systemd-logind[1486]: New session 13 of user core. Oct 29 23:33:13.321465 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 23:33:13.454012 sshd[5069]: Connection closed by 10.0.0.1 port 49556 Oct 29 23:33:13.454458 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:13.457895 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Oct 29 23:33:13.458166 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:49556.service: Deactivated successfully. Oct 29 23:33:13.460245 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 23:33:13.461954 systemd-logind[1486]: Removed session 13. 
Oct 29 23:33:14.379217 containerd[1503]: time="2025-10-29T23:33:14.378926148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:33:14.599779 containerd[1503]: time="2025-10-29T23:33:14.599725384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:14.600808 containerd[1503]: time="2025-10-29T23:33:14.600751749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:33:14.600866 containerd[1503]: time="2025-10-29T23:33:14.600835800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:14.601050 kubelet[2661]: E1029 23:33:14.601003 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:14.601361 kubelet[2661]: E1029 23:33:14.601051 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:14.601361 kubelet[2661]: E1029 23:33:14.601298 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s28g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hrtth_calico-system(dbd38070-358f-48dd-8533-17b5e6ecf6e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:14.602834 containerd[1503]: time="2025-10-29T23:33:14.601797237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:33:14.602908 kubelet[2661]: E1029 23:33:14.602783 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:33:14.809529 containerd[1503]: time="2025-10-29T23:33:14.809471276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:14.810544 containerd[1503]: time="2025-10-29T23:33:14.810493760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:33:14.810604 containerd[1503]: time="2025-10-29T23:33:14.810575530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:14.810761 kubelet[2661]: E1029 23:33:14.810707 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:14.810814 kubelet[2661]: E1029 23:33:14.810765 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:14.811288 kubelet[2661]: E1029 23:33:14.810895 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw49n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b49b9964b-6j8qw_calico-system(628d1c40-ffd8-42f2-bf12-78878eaf122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:14.812102 kubelet[2661]: E1029 23:33:14.812050 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b" Oct 29 23:33:15.379637 containerd[1503]: time="2025-10-29T23:33:15.379577717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:15.597401 containerd[1503]: time="2025-10-29T23:33:15.597340054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:15.598395 containerd[1503]: time="2025-10-29T23:33:15.598360777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:15.598515 containerd[1503]: time="2025-10-29T23:33:15.598402342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:15.598840 kubelet[2661]: E1029 23:33:15.598601 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:15.598840 kubelet[2661]: E1029 23:33:15.598670 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:15.598963 kubelet[2661]: E1029 23:33:15.598811 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpx5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-wwszd_calico-apiserver(1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:15.600207 kubelet[2661]: E1029 23:33:15.600165 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af" Oct 29 23:33:18.380127 containerd[1503]: time="2025-10-29T23:33:18.380077406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:33:18.466863 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:49566.service - OpenSSH per-connection server daemon (10.0.0.1:49566). Oct 29 23:33:18.516862 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 49566 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:18.519470 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:18.525192 systemd-logind[1486]: New session 14 of user core. Oct 29 23:33:18.535424 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 29 23:33:18.580345 containerd[1503]: time="2025-10-29T23:33:18.580297372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:18.581453 containerd[1503]: time="2025-10-29T23:33:18.581414114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:33:18.581515 containerd[1503]: time="2025-10-29T23:33:18.581497802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:33:18.583009 kubelet[2661]: E1029 23:33:18.581750 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:18.583009 kubelet[2661]: E1029 23:33:18.581978 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:18.583009 kubelet[2661]: E1029 23:33:18.582277 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:18.585424 containerd[1503]: time="2025-10-29T23:33:18.584819946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:33:18.674346 sshd[5087]: Connection closed by 10.0.0.1 port 49566 Oct 29 23:33:18.674721 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:18.679376 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:49566.service: Deactivated successfully. Oct 29 23:33:18.681790 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 23:33:18.682616 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Oct 29 23:33:18.683871 systemd-logind[1486]: Removed session 14. Oct 29 23:33:18.797763 containerd[1503]: time="2025-10-29T23:33:18.797721835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:18.798719 containerd[1503]: time="2025-10-29T23:33:18.798685083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:33:18.798796 containerd[1503]: time="2025-10-29T23:33:18.798767171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:33:18.798931 kubelet[2661]: E1029 23:33:18.798896 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:18.798979 kubelet[2661]: E1029 23:33:18.798944 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:18.799116 kubelet[2661]: E1029 23:33:18.799071 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:18.800371 kubelet[2661]: E1029 23:33:18.800325 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:33:19.379975 kubelet[2661]: E1029 23:33:19.379925 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68d6cc8c8-bzrtz" podUID="f5482b89-57c5-421d-ae18-209fa743d482" Oct 29 23:33:22.378326 kubelet[2661]: E1029 23:33:22.377941 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:33:23.697026 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:49128.service - OpenSSH per-connection server daemon (10.0.0.1:49128). Oct 29 23:33:23.746749 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 49128 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:23.748056 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:23.754634 systemd-logind[1486]: New session 15 of user core. Oct 29 23:33:23.768470 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 23:33:23.897947 sshd[5113]: Connection closed by 10.0.0.1 port 49128 Oct 29 23:33:23.898065 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:23.901819 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:49128.service: Deactivated successfully. Oct 29 23:33:23.903702 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 23:33:23.904512 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Oct 29 23:33:23.905846 systemd-logind[1486]: Removed session 15. Oct 29 23:33:26.378472 kubelet[2661]: E1029 23:33:26.378399 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b" Oct 29 23:33:28.077116 containerd[1503]: time="2025-10-29T23:33:28.077060955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a468c98d7813cda9d3c96adf1d4549ad10f639dac3cd934863987d2a1d61cd3\" id:\"30af909c1a2a57d3628601edb09b3a32839d98ef422f106963a44df50a4a459e\" pid:5139 exited_at:{seconds:1761780808 nanos:76556034}" Oct 29 23:33:28.912989 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:49130.service - OpenSSH per-connection server daemon (10.0.0.1:49130). 
Oct 29 23:33:28.963906 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 49130 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:28.965910 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:28.971733 systemd-logind[1486]: New session 16 of user core. Oct 29 23:33:28.980420 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 23:33:29.143379 sshd[5156]: Connection closed by 10.0.0.1 port 49130 Oct 29 23:33:29.143748 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:29.156902 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:49130.service: Deactivated successfully. Oct 29 23:33:29.159612 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 23:33:29.160322 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Oct 29 23:33:29.162585 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:49134.service - OpenSSH per-connection server daemon (10.0.0.1:49134). Oct 29 23:33:29.163929 systemd-logind[1486]: Removed session 16. Oct 29 23:33:29.231963 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 49134 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:29.233408 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:29.238001 systemd-logind[1486]: New session 17 of user core. Oct 29 23:33:29.247439 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 23:33:29.381338 kubelet[2661]: E1029 23:33:29.380648 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af" Oct 29 23:33:29.381338 kubelet[2661]: E1029 23:33:29.380736 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hrtth" podUID="dbd38070-358f-48dd-8533-17b5e6ecf6e9" Oct 29 23:33:29.381992 kubelet[2661]: E1029 23:33:29.381956 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a" Oct 29 23:33:29.478408 sshd[5173]: Connection closed by 10.0.0.1 port 49134 Oct 29 23:33:29.479023 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:29.488420 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:49134.service: Deactivated successfully. Oct 29 23:33:29.490019 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 23:33:29.491769 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Oct 29 23:33:29.494226 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:47618.service - OpenSSH per-connection server daemon (10.0.0.1:47618). Oct 29 23:33:29.495391 systemd-logind[1486]: Removed session 17. Oct 29 23:33:29.556934 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 47618 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:29.558354 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:29.563366 systemd-logind[1486]: New session 18 of user core. Oct 29 23:33:29.574434 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 23:33:30.201794 sshd[5189]: Connection closed by 10.0.0.1 port 47618 Oct 29 23:33:30.202439 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:30.212540 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:47618.service: Deactivated successfully. Oct 29 23:33:30.215891 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 23:33:30.216759 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Oct 29 23:33:30.219916 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:47628.service - OpenSSH per-connection server daemon (10.0.0.1:47628). Oct 29 23:33:30.220719 systemd-logind[1486]: Removed session 18. Oct 29 23:33:30.281701 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 47628 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:30.283833 sshd-session[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:30.292306 systemd-logind[1486]: New session 19 of user core. Oct 29 23:33:30.298447 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 23:33:30.645308 sshd[5212]: Connection closed by 10.0.0.1 port 47628 Oct 29 23:33:30.646432 sshd-session[5209]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:30.658257 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:47628.service: Deactivated successfully. Oct 29 23:33:30.660051 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 23:33:30.661144 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Oct 29 23:33:30.665294 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:47644.service - OpenSSH per-connection server daemon (10.0.0.1:47644). Oct 29 23:33:30.665977 systemd-logind[1486]: Removed session 19. Oct 29 23:33:30.720799 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 47644 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:30.722313 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:30.727195 systemd-logind[1486]: New session 20 of user core. 
Oct 29 23:33:30.738453 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 29 23:33:30.894158 sshd[5228]: Connection closed by 10.0.0.1 port 47644 Oct 29 23:33:30.894552 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:30.898322 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:47644.service: Deactivated successfully. Oct 29 23:33:30.901871 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 23:33:30.902947 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Oct 29 23:33:30.904168 systemd-logind[1486]: Removed session 20. Oct 29 23:33:32.378379 containerd[1503]: time="2025-10-29T23:33:32.378312488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:33:32.589730 containerd[1503]: time="2025-10-29T23:33:32.589681095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:32.591615 containerd[1503]: time="2025-10-29T23:33:32.591516466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:33:32.591615 containerd[1503]: time="2025-10-29T23:33:32.591560663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:33:32.593707 kubelet[2661]: E1029 23:33:32.592581 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:32.593707 kubelet[2661]: E1029 23:33:32.592636 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:32.593707 kubelet[2661]: E1029 23:33:32.592750 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c7cbba715eb34548b2bdb54b4c82c9c6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:32.596436 containerd[1503]: time="2025-10-29T23:33:32.596373539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:33:32.821925 containerd[1503]: time="2025-10-29T23:33:32.821600045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:32.824784 containerd[1503]: time="2025-10-29T23:33:32.824680543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:33:32.824784 containerd[1503]: time="2025-10-29T23:33:32.824729980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:32.825738 kubelet[2661]: E1029 23:33:32.825593 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:32.825738 kubelet[2661]: E1029 23:33:32.825645 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:32.826664 kubelet[2661]: E1029 23:33:32.826557 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhdhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68d6cc8c8-bzrtz_calico-system(f5482b89-57c5-421d-ae18-209fa743d482): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:32.827983 kubelet[2661]: E1029 23:33:32.827935 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68d6cc8c8-bzrtz" podUID="f5482b89-57c5-421d-ae18-209fa743d482" Oct 29 23:33:35.908458 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:47654.service - OpenSSH per-connection server daemon (10.0.0.1:47654). 
Oct 29 23:33:35.957716 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 47654 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU Oct 29 23:33:35.959083 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:35.964852 systemd-logind[1486]: New session 21 of user core. Oct 29 23:33:35.973441 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 29 23:33:36.141689 sshd[5251]: Connection closed by 10.0.0.1 port 47654 Oct 29 23:33:36.142352 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:36.147421 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:47654.service: Deactivated successfully. Oct 29 23:33:36.149569 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 23:33:36.152231 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Oct 29 23:33:36.153785 systemd-logind[1486]: Removed session 21. Oct 29 23:33:36.386822 containerd[1503]: time="2025-10-29T23:33:36.386755607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:36.595031 containerd[1503]: time="2025-10-29T23:33:36.594975197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:36.596056 containerd[1503]: time="2025-10-29T23:33:36.596000033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:36.596106 containerd[1503]: time="2025-10-29T23:33:36.596078630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:36.596303 kubelet[2661]: E1029 23:33:36.596263 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:36.596647 kubelet[2661]: E1029 23:33:36.596313 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:36.598780 kubelet[2661]: E1029 23:33:36.596552 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-bb7xs_calico-apiserver(aa5b4e14-b80b-4af9-80b3-9f6279c72ae8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:36.599927 kubelet[2661]: E1029 23:33:36.599871 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-bb7xs" podUID="aa5b4e14-b80b-4af9-80b3-9f6279c72ae8" Oct 29 23:33:39.378887 containerd[1503]: time="2025-10-29T23:33:39.378018897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:33:39.606921 containerd[1503]: time="2025-10-29T23:33:39.606874626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:39.608223 containerd[1503]: time="2025-10-29T23:33:39.608120987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:33:39.608223 containerd[1503]: 
time="2025-10-29T23:33:39.608186304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:39.608425 kubelet[2661]: E1029 23:33:39.608354 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:39.608425 kubelet[2661]: E1029 23:33:39.608406 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:39.608831 kubelet[2661]: E1029 23:33:39.608558 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw49n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b49b9964b-6j8qw_calico-system(628d1c40-ffd8-42f2-bf12-78878eaf122b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 29 23:33:39.609752 kubelet[2661]: E1029 23:33:39.609717 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b49b9964b-6j8qw" podUID="628d1c40-ffd8-42f2-bf12-78878eaf122b"
Oct 29 23:33:40.379587 containerd[1503]: time="2025-10-29T23:33:40.379445000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 29 23:33:40.596943 containerd[1503]: time="2025-10-29T23:33:40.596879838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 23:33:40.659191 containerd[1503]: time="2025-10-29T23:33:40.659061025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 29 23:33:40.659191 containerd[1503]: time="2025-10-29T23:33:40.659139542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 29 23:33:40.660308 kubelet[2661]: E1029 23:33:40.659330 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 23:33:40.660308 kubelet[2661]: E1029 23:33:40.659393 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 29 23:33:40.660308 kubelet[2661]: E1029 23:33:40.659918 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mpx5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-674644db6-wwszd_calico-apiserver(1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 29 23:33:40.660731 containerd[1503]: time="2025-10-29T23:33:40.659755205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 29 23:33:40.661120 kubelet[2661]: E1029 23:33:40.661065 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-674644db6-wwszd" podUID="1e0f27ca-ac8c-40e5-bfd4-e2be1f0ea6af"
Oct 29 23:33:40.973555 containerd[1503]: time="2025-10-29T23:33:40.973415618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 23:33:40.975637 containerd[1503]: time="2025-10-29T23:33:40.975576036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 29 23:33:40.975767 containerd[1503]: time="2025-10-29T23:33:40.975660114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 29 23:33:40.975927 kubelet[2661]: E1029 23:33:40.975885 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 23:33:40.975979 kubelet[2661]: E1029 23:33:40.975941 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 29 23:33:40.976135 kubelet[2661]: E1029 23:33:40.976085 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 29 23:33:40.979235 containerd[1503]: time="2025-10-29T23:33:40.979173014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 29 23:33:41.155371 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:60174.service - OpenSSH per-connection server daemon (10.0.0.1:60174).
Oct 29 23:33:41.217537 containerd[1503]: time="2025-10-29T23:33:41.217485213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 29 23:33:41.219211 containerd[1503]: time="2025-10-29T23:33:41.219154251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 29 23:33:41.220467 containerd[1503]: time="2025-10-29T23:33:41.220425779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 29 23:33:41.221647 kubelet[2661]: E1029 23:33:41.221604 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 23:33:41.222295 kubelet[2661]: E1029 23:33:41.221661 2661 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 29 23:33:41.222295 kubelet[2661]: E1029 23:33:41.222147 2661 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbh69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jrl9_calico-system(1344333c-5c8b-4908-b272-45117ec8f68a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 29 23:33:41.223767 kubelet[2661]: E1029 23:33:41.223659 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jrl9" podUID="1344333c-5c8b-4908-b272-45117ec8f68a"
Oct 29 23:33:41.234928 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 60174 ssh2: RSA SHA256:oHJZtFP2oI9hKS5VRrhOdWzn5ftpIPt0FyPpfDRoisU
Oct 29 23:33:41.236378 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 29 23:33:41.244155 systemd-logind[1486]: New session 22 of user core.
Oct 29 23:33:41.248455 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 29 23:33:41.429781 sshd[5275]: Connection closed by 10.0.0.1 port 60174
Oct 29 23:33:41.430894 sshd-session[5272]: pam_unix(sshd:session): session closed for user core
Oct 29 23:33:41.434543 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:60174.service: Deactivated successfully.
Oct 29 23:33:41.436887 systemd[1]: session-22.scope: Deactivated successfully.
Oct 29 23:33:41.438000 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit.
Oct 29 23:33:41.440461 systemd-logind[1486]: Removed session 22.
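Every pull failure in the log above is a 404 Not Found reported by containerd against ghcr.io, which suggests the referenced v3.30.4 tags simply do not resolve under ghcr.io/flatcar/calico/* rather than a network or credential problem. The following is a minimal sketch of how one could confirm that independently of the kubelet, using the standard OCI distribution manifest endpoint; it assumes the repositories are public and that ghcr.io hands out anonymous pull tokens from its /token endpoint (assumptions for illustration, not taken from the log).

```python
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/kube-controllers"  # repository from the failing pull above
TAG = "v3.30.4"                           # tag containerd reports as "not found"


def anonymous_token(repo: str) -> str:
    # Assumption: ghcr.io issues anonymous bearer tokens for public repositories
    # via its /token endpoint with a pull scope.
    url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]


def tag_exists(repo: str, tag: str) -> bool:
    # HEAD the manifest for the tag, accepting both OCI and Docker media types.
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {anonymous_token(repo)}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # matches the 404 containerd logs above
            return False
        raise


if __name__ == "__main__":
    print(f"{REGISTRY}/{REPO}:{TAG} exists:", tag_exists(REPO, TAG))
```

A False result here corresponds to the "fetch failed after status: 404 Not Found" lines from containerd; a True result would instead point the investigation at the node's registry configuration or at a transient failure.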