Oct 30 23:57:28.780355 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 30 23:57:28.780397 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Oct 30 22:10:35 -00 2025
Oct 30 23:57:28.780407 kernel: KASLR enabled
Oct 30 23:57:28.780413 kernel: efi: EFI v2.7 by EDK II
Oct 30 23:57:28.780418 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 30 23:57:28.780424 kernel: random: crng init done
Oct 30 23:57:28.780430 kernel: secureboot: Secure boot disabled
Oct 30 23:57:28.780436 kernel: ACPI: Early table checksum verification disabled
Oct 30 23:57:28.780441 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 30 23:57:28.780448 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 30 23:57:28.780454 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780460 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780465 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780471 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780478 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780485 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780492 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780497 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780503 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 23:57:28.780509 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 30 23:57:28.780515 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 30 23:57:28.780521 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 23:57:28.780527 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 30 23:57:28.780533 kernel: Zone ranges:
Oct 30 23:57:28.780539 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 23:57:28.780546 kernel: DMA32 empty
Oct 30 23:57:28.780552 kernel: Normal empty
Oct 30 23:57:28.780558 kernel: Device empty
Oct 30 23:57:28.780564 kernel: Movable zone start for each node
Oct 30 23:57:28.780570 kernel: Early memory node ranges
Oct 30 23:57:28.780676 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 30 23:57:28.780684 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 30 23:57:28.780690 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 30 23:57:28.780696 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 30 23:57:28.780702 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 30 23:57:28.780708 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 30 23:57:28.780714 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 30 23:57:28.780723 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 30 23:57:28.780730 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 30 23:57:28.780736 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 30 23:57:28.780744 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 30 23:57:28.780751 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 30 23:57:28.780757 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 30 23:57:28.780765 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 23:57:28.780771 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 30 23:57:28.780777 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 30 23:57:28.780784 kernel: psci: probing for conduit method from ACPI.
Oct 30 23:57:28.780790 kernel: psci: PSCIv1.1 detected in firmware.
Oct 30 23:57:28.780796 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 30 23:57:28.780802 kernel: psci: Trusted OS migration not required
Oct 30 23:57:28.780809 kernel: psci: SMC Calling Convention v1.1
Oct 30 23:57:28.780815 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 30 23:57:28.780822 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 30 23:57:28.780829 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 30 23:57:28.780836 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 30 23:57:28.780842 kernel: Detected PIPT I-cache on CPU0
Oct 30 23:57:28.780849 kernel: CPU features: detected: GIC system register CPU interface
Oct 30 23:57:28.780855 kernel: CPU features: detected: Spectre-v4
Oct 30 23:57:28.780861 kernel: CPU features: detected: Spectre-BHB
Oct 30 23:57:28.780868 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 30 23:57:28.780874 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 30 23:57:28.780881 kernel: CPU features: detected: ARM erratum 1418040
Oct 30 23:57:28.780887 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 30 23:57:28.780893 kernel: alternatives: applying boot alternatives
Oct 30 23:57:28.780901 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fe9a0b97d6cc3ae4bb51413f0dcf8829730d06a1c56255949a7891220815365c
Oct 30 23:57:28.780909 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 23:57:28.780915 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 23:57:28.780922 kernel: Fallback order for Node 0: 0
Oct 30 23:57:28.780928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 30 23:57:28.780934 kernel: Policy zone: DMA
Oct 30 23:57:28.780940 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 23:57:28.780947 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 30 23:57:28.780953 kernel: software IO TLB: area num 4.
Oct 30 23:57:28.780959 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 30 23:57:28.780966 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 30 23:57:28.780972 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 23:57:28.780980 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 23:57:28.780987 kernel: rcu: RCU event tracing is enabled.
Oct 30 23:57:28.780993 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 23:57:28.781000 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 23:57:28.781006 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 23:57:28.781013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 23:57:28.781025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 23:57:28.781031 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 23:57:28.781038 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 23:57:28.781045 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 30 23:57:28.781051 kernel: GICv3: 256 SPIs implemented
Oct 30 23:57:28.781060 kernel: GICv3: 0 Extended SPIs implemented
Oct 30 23:57:28.781067 kernel: Root IRQ handler: gic_handle_irq
Oct 30 23:57:28.781073 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 30 23:57:28.781080 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 30 23:57:28.781087 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 30 23:57:28.781093 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 30 23:57:28.781100 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 30 23:57:28.781107 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 30 23:57:28.781113 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 30 23:57:28.781120 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 30 23:57:28.781127 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 23:57:28.781133 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 23:57:28.781141 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 30 23:57:28.781148 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 30 23:57:28.781155 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 30 23:57:28.781162 kernel: arm-pv: using stolen time PV
Oct 30 23:57:28.781169 kernel: Console: colour dummy device 80x25
Oct 30 23:57:28.781175 kernel: ACPI: Core revision 20240827
Oct 30 23:57:28.781182 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 30 23:57:28.781189 kernel: pid_max: default: 32768 minimum: 301
Oct 30 23:57:28.781196 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 23:57:28.781202 kernel: landlock: Up and running.
Oct 30 23:57:28.781210 kernel: SELinux: Initializing.
Oct 30 23:57:28.781217 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 23:57:28.781224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 23:57:28.781231 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 23:57:28.781237 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 23:57:28.781244 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 23:57:28.781251 kernel: Remapping and enabling EFI services.
Oct 30 23:57:28.781257 kernel: smp: Bringing up secondary CPUs ...
Oct 30 23:57:28.781264 kernel: Detected PIPT I-cache on CPU1
Oct 30 23:57:28.781275 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 30 23:57:28.781282 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 30 23:57:28.781290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 23:57:28.781298 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 30 23:57:28.781304 kernel: Detected PIPT I-cache on CPU2
Oct 30 23:57:28.781312 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 30 23:57:28.781319 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 30 23:57:28.781326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 23:57:28.781334 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 30 23:57:28.781341 kernel: Detected PIPT I-cache on CPU3
Oct 30 23:57:28.781348 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 30 23:57:28.781355 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 30 23:57:28.781362 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 23:57:28.781368 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 30 23:57:28.781375 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 23:57:28.781398 kernel: SMP: Total of 4 processors activated.
Oct 30 23:57:28.781406 kernel: CPU: All CPU(s) started at EL1
Oct 30 23:57:28.781415 kernel: CPU features: detected: 32-bit EL0 Support
Oct 30 23:57:28.781422 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 30 23:57:28.781429 kernel: CPU features: detected: Common not Private translations
Oct 30 23:57:28.781436 kernel: CPU features: detected: CRC32 instructions
Oct 30 23:57:28.781443 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 30 23:57:28.781450 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 30 23:57:28.781457 kernel: CPU features: detected: LSE atomic instructions
Oct 30 23:57:28.781463 kernel: CPU features: detected: Privileged Access Never
Oct 30 23:57:28.781470 kernel: CPU features: detected: RAS Extension Support
Oct 30 23:57:28.781479 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 30 23:57:28.781486 kernel: alternatives: applying system-wide alternatives
Oct 30 23:57:28.781493 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 30 23:57:28.781500 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved)
Oct 30 23:57:28.781508 kernel: devtmpfs: initialized
Oct 30 23:57:28.781515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 23:57:28.781522 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 23:57:28.781529 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 30 23:57:28.781536 kernel: 0 pages in range for non-PLT usage
Oct 30 23:57:28.781544 kernel: 508560 pages in range for PLT usage
Oct 30 23:57:28.781551 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 23:57:28.781558 kernel: SMBIOS 3.0.0 present.
Oct 30 23:57:28.781564 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 30 23:57:28.781571 kernel: DMI: Memory slots populated: 1/1
Oct 30 23:57:28.781586 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 23:57:28.781593 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 30 23:57:28.781600 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 30 23:57:28.781607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 30 23:57:28.781616 kernel: audit: initializing netlink subsys (disabled)
Oct 30 23:57:28.781622 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 30 23:57:28.781629 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 23:57:28.781636 kernel: cpuidle: using governor menu
Oct 30 23:57:28.781643 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 30 23:57:28.781650 kernel: ASID allocator initialised with 32768 entries
Oct 30 23:57:28.781657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 23:57:28.781676 kernel: Serial: AMBA PL011 UART driver
Oct 30 23:57:28.781683 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 23:57:28.781691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 23:57:28.781698 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 30 23:57:28.781705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 30 23:57:28.781712 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 23:57:28.781719 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 23:57:28.781726 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 30 23:57:28.781733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 30 23:57:28.781739 kernel: ACPI: Added _OSI(Module Device)
Oct 30 23:57:28.781746 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 23:57:28.781754 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 23:57:28.781761 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 23:57:28.781768 kernel: ACPI: Interpreter enabled
Oct 30 23:57:28.781775 kernel: ACPI: Using GIC for interrupt routing
Oct 30 23:57:28.781782 kernel: ACPI: MCFG table detected, 1 entries
Oct 30 23:57:28.781789 kernel: ACPI: CPU0 has been hot-added
Oct 30 23:57:28.781796 kernel: ACPI: CPU1 has been hot-added
Oct 30 23:57:28.781803 kernel: ACPI: CPU2 has been hot-added
Oct 30 23:57:28.781809 kernel: ACPI: CPU3 has been hot-added
Oct 30 23:57:28.781816 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 30 23:57:28.781825 kernel: printk: legacy console [ttyAMA0] enabled
Oct 30 23:57:28.781832 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 23:57:28.781971 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 23:57:28.782035 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 30 23:57:28.782093 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 30 23:57:28.782148 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 30 23:57:28.782202 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 30 23:57:28.782214 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 30 23:57:28.782221 kernel: PCI host bridge to bus 0000:00
Oct 30 23:57:28.782285 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 30 23:57:28.782338 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 30 23:57:28.782415 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 30 23:57:28.782473 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 23:57:28.782551 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 30 23:57:28.782638 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 23:57:28.782700 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 30 23:57:28.782758 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 30 23:57:28.782815 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 30 23:57:28.782872 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 30 23:57:28.782930 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 30 23:57:28.782991 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 30 23:57:28.783043 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 30 23:57:28.783094 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 30 23:57:28.783145 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 30 23:57:28.783154 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 30 23:57:28.783161 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 30 23:57:28.783168 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 30 23:57:28.783175 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 30 23:57:28.783183 kernel: iommu: Default domain type: Translated
Oct 30 23:57:28.783190 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 30 23:57:28.783197 kernel: efivars: Registered efivars operations
Oct 30 23:57:28.783204 kernel: vgaarb: loaded
Oct 30 23:57:28.783211 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 30 23:57:28.783218 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 23:57:28.783225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 23:57:28.783232 kernel: pnp: PnP ACPI init
Oct 30 23:57:28.783304 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 30 23:57:28.783315 kernel: pnp: PnP ACPI: found 1 devices
Oct 30 23:57:28.783322 kernel: NET: Registered PF_INET protocol family
Oct 30 23:57:28.783329 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 23:57:28.783336 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 23:57:28.783344 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 23:57:28.783351 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 23:57:28.783358 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 23:57:28.783365 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 23:57:28.783373 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 23:57:28.783391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 23:57:28.783399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 23:57:28.783410 kernel: PCI: CLS 0 bytes, default 64
Oct 30 23:57:28.783418 kernel: kvm [1]: HYP mode not available
Oct 30 23:57:28.783425 kernel: Initialise system trusted keyrings
Oct 30 23:57:28.783432 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 23:57:28.783439 kernel: Key type asymmetric registered
Oct 30 23:57:28.783446 kernel: Asymmetric key parser 'x509' registered
Oct 30 23:57:28.783456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 30 23:57:28.783463 kernel: io scheduler mq-deadline registered
Oct 30 23:57:28.783470 kernel: io scheduler kyber registered
Oct 30 23:57:28.783477 kernel: io scheduler bfq registered
Oct 30 23:57:28.783484 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 30 23:57:28.783491 kernel: ACPI: button: Power Button [PWRB]
Oct 30 23:57:28.783498 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 30 23:57:28.783564 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 30 23:57:28.783580 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 23:57:28.783589 kernel: thunder_xcv, ver 1.0
Oct 30 23:57:28.783596 kernel: thunder_bgx, ver 1.0
Oct 30 23:57:28.783603 kernel: nicpf, ver 1.0
Oct 30 23:57:28.783610 kernel: nicvf, ver 1.0
Oct 30 23:57:28.783679 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 30 23:57:28.783734 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-30T23:57:28 UTC (1761868648)
Oct 30 23:57:28.783743 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 30 23:57:28.783751 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 30 23:57:28.783759 kernel: watchdog: NMI not fully supported
Oct 30 23:57:28.783766 kernel: watchdog: Hard watchdog permanently disabled
Oct 30 23:57:28.783773 kernel: NET: Registered PF_INET6 protocol family
Oct 30 23:57:28.783780 kernel: Segment Routing with IPv6
Oct 30 23:57:28.783787 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 23:57:28.783794 kernel: NET: Registered PF_PACKET protocol family
Oct 30 23:57:28.783801 kernel: Key type dns_resolver registered
Oct 30 23:57:28.783808 kernel: registered taskstats version 1
Oct 30 23:57:28.783815 kernel: Loading compiled-in X.509 certificates
Oct 30 23:57:28.783822 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 38254a600def0519df2d3cbc203d84bd87848ec9'
Oct 30 23:57:28.783830 kernel: Demotion targets for Node 0: null
Oct 30 23:57:28.783837 kernel: Key type .fscrypt registered
Oct 30 23:57:28.783844 kernel: Key type fscrypt-provisioning registered
Oct 30 23:57:28.783851 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 23:57:28.783858 kernel: ima: Allocated hash algorithm: sha1
Oct 30 23:57:28.783865 kernel: ima: No architecture policies found
Oct 30 23:57:28.783871 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 30 23:57:28.783878 kernel: clk: Disabling unused clocks
Oct 30 23:57:28.783885 kernel: PM: genpd: Disabling unused power domains
Oct 30 23:57:28.783893 kernel: Warning: unable to open an initial console.
Oct 30 23:57:28.783900 kernel: Freeing unused kernel memory: 38976K
Oct 30 23:57:28.783907 kernel: Run /init as init process
Oct 30 23:57:28.783914 kernel: with arguments:
Oct 30 23:57:28.783921 kernel: /init
Oct 30 23:57:28.783928 kernel: with environment:
Oct 30 23:57:28.783934 kernel: HOME=/
Oct 30 23:57:28.783941 kernel: TERM=linux
Oct 30 23:57:28.783949 systemd[1]: Successfully made /usr/ read-only.
Oct 30 23:57:28.783960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 23:57:28.783968 systemd[1]: Detected virtualization kvm.
Oct 30 23:57:28.783975 systemd[1]: Detected architecture arm64.
Oct 30 23:57:28.783982 systemd[1]: Running in initrd.
Oct 30 23:57:28.783989 systemd[1]: No hostname configured, using default hostname.
Oct 30 23:57:28.783997 systemd[1]: Hostname set to .
Oct 30 23:57:28.784004 systemd[1]: Initializing machine ID from VM UUID.
Oct 30 23:57:28.784013 systemd[1]: Queued start job for default target initrd.target.
Oct 30 23:57:28.784021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 23:57:28.784028 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 23:57:28.784036 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 23:57:28.784044 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 23:57:28.784051 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 23:57:28.784059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 23:57:28.784069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 30 23:57:28.784077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 30 23:57:28.784084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 23:57:28.784092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 23:57:28.784099 systemd[1]: Reached target paths.target - Path Units.
Oct 30 23:57:28.784106 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 23:57:28.784114 systemd[1]: Reached target swap.target - Swaps.
Oct 30 23:57:28.784121 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 23:57:28.784130 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 23:57:28.784138 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 23:57:28.784145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 23:57:28.784153 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 23:57:28.784160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 23:57:28.784167 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 23:57:28.784175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 23:57:28.784182 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 23:57:28.784191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 23:57:28.784199 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 23:57:28.784206 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 23:57:28.784214 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 23:57:28.784222 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 23:57:28.784229 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 23:57:28.784237 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 23:57:28.784245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 23:57:28.784252 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 23:57:28.784262 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 23:57:28.784270 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 23:57:28.784278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 23:57:28.784300 systemd-journald[244]: Collecting audit messages is disabled.
Oct 30 23:57:28.784320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 23:57:28.784329 systemd-journald[244]: Journal started
Oct 30 23:57:28.784348 systemd-journald[244]: Runtime Journal (/run/log/journal/3bf18bba129a478b96b88334bfe98009) is 6M, max 48.5M, 42.4M free.
Oct 30 23:57:28.790436 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 23:57:28.790463 kernel: Bridge firewalling registered
Oct 30 23:57:28.774007 systemd-modules-load[246]: Inserted module 'overlay'
Oct 30 23:57:28.788877 systemd-modules-load[246]: Inserted module 'br_netfilter'
Oct 30 23:57:28.793829 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 23:57:28.797399 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 23:57:28.797716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 23:57:28.801025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 23:57:28.802832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 23:57:28.811546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 23:57:28.816660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 23:57:28.818701 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 23:57:28.819554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 23:57:28.821395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 23:57:28.825087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 23:57:28.834534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 23:57:28.839035 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 23:57:28.844405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 23:57:28.848735 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fe9a0b97d6cc3ae4bb51413f0dcf8829730d06a1c56255949a7891220815365c
Oct 30 23:57:28.877635 systemd-resolved[291]: Positive Trust Anchors:
Oct 30 23:57:28.877654 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 23:57:28.877690 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 23:57:28.882753 systemd-resolved[291]: Defaulting to hostname 'linux'.
Oct 30 23:57:28.883780 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 23:57:28.890654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 23:57:28.930410 kernel: SCSI subsystem initialized
Oct 30 23:57:28.935399 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 23:57:28.942401 kernel: iscsi: registered transport (tcp)
Oct 30 23:57:28.955439 kernel: iscsi: registered transport (qla4xxx)
Oct 30 23:57:28.955479 kernel: QLogic iSCSI HBA Driver
Oct 30 23:57:28.972764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 23:57:29.001456 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 23:57:29.003261 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 23:57:29.053286 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 23:57:29.055823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 23:57:29.117428 kernel: raid6: neonx8 gen() 15714 MB/s
Oct 30 23:57:29.134447 kernel: raid6: neonx4 gen() 15663 MB/s
Oct 30 23:57:29.151442 kernel: raid6: neonx2 gen() 13168 MB/s
Oct 30 23:57:29.168441 kernel: raid6: neonx1 gen() 10410 MB/s
Oct 30 23:57:29.185444 kernel: raid6: int64x8 gen() 6846 MB/s
Oct 30 23:57:29.202442 kernel: raid6: int64x4 gen() 7350 MB/s
Oct 30 23:57:29.219433 kernel: raid6: int64x2 gen() 6089 MB/s
Oct 30 23:57:29.236682 kernel: raid6: int64x1 gen() 5041 MB/s
Oct 30 23:57:29.236741 kernel: raid6: using algorithm neonx8 gen() 15714 MB/s
Oct 30 23:57:29.254674 kernel: raid6: .... xor() 12065 MB/s, rmw enabled
Oct 30 23:57:29.254822 kernel: raid6: using neon recovery algorithm
Oct 30 23:57:29.260420 kernel: xor: measuring software checksum speed
Oct 30 23:57:29.260470 kernel: 8regs : 20738 MB/sec
Oct 30 23:57:29.260480 kernel: 32regs : 18432 MB/sec
Oct 30 23:57:29.261689 kernel: arm64_neon : 27908 MB/sec
Oct 30 23:57:29.261726 kernel: xor: using function: arm64_neon (27908 MB/sec)
Oct 30 23:57:29.315415 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 23:57:29.321683 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 23:57:29.324328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 23:57:29.351005 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Oct 30 23:57:29.355106 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 23:57:29.357670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 23:57:29.392435 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Oct 30 23:57:29.417477 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 23:57:29.421350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 23:57:29.473398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 23:57:29.477786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 23:57:29.539399 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 30 23:57:29.539592 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 30 23:57:29.544970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 23:57:29.545111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 23:57:29.549498 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 23:57:29.556961 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 23:57:29.556987 kernel: GPT:9289727 != 19775487
Oct 30 23:57:29.556996 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 23:57:29.557013 kernel: GPT:9289727 != 19775487
Oct 30 23:57:29.557022 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 23:57:29.557030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 23:57:29.556694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 23:57:29.586009 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 23:57:29.587488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 23:57:29.596059 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 30 23:57:29.604769 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 30 23:57:29.611243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 30 23:57:29.612688 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 30 23:57:29.621951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 23:57:29.623555 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 23:57:29.627923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 23:57:29.630482 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 23:57:29.633697 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 23:57:29.635974 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 23:57:29.657431 disk-uuid[591]: Primary Header is updated.
Oct 30 23:57:29.657431 disk-uuid[591]: Secondary Entries is updated.
Oct 30 23:57:29.657431 disk-uuid[591]: Secondary Header is updated.
Oct 30 23:57:29.658367 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 23:57:29.665442 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 23:57:30.672480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 23:57:30.673591 disk-uuid[597]: The operation has completed successfully.
Oct 30 23:57:30.706167 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 30 23:57:30.706268 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 23:57:30.725074 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 30 23:57:30.747533 sh[611]: Success
Oct 30 23:57:30.760839 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 23:57:30.760905 kernel: device-mapper: uevent: version 1.0.3
Oct 30 23:57:30.762387 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 23:57:30.770437 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 30 23:57:30.801094 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 23:57:30.803249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 30 23:57:30.816444 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 30 23:57:30.825247 kernel: BTRFS: device fsid f92dbd85-4118-4758-bb41-80b2b70966d3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (623)
Oct 30 23:57:30.825293 kernel: BTRFS info (device dm-0): first mount of filesystem f92dbd85-4118-4758-bb41-80b2b70966d3
Oct 30 23:57:30.825309 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 30 23:57:30.831151 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 23:57:30.831183 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 23:57:30.832422 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 30 23:57:30.833885 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 23:57:30.835553 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 23:57:30.836425 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 23:57:30.839925 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 23:57:30.864426 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652)
Oct 30 23:57:30.867664 kernel: BTRFS info (device vda6): first mount of filesystem 186d34a8-58e2-4a7d-a93b-654a23856822
Oct 30 23:57:30.867730 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 23:57:30.870901 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 23:57:30.870963 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 23:57:30.876405 kernel: BTRFS info (device vda6): last unmount of filesystem 186d34a8-58e2-4a7d-a93b-654a23856822
Oct 30 23:57:30.877471 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 23:57:30.880178 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 23:57:30.950178 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 23:57:30.953923 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 23:57:30.989974 ignition[702]: Ignition 2.22.0
Oct 30 23:57:30.990896 ignition[702]: Stage: fetch-offline
Oct 30 23:57:30.990952 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:30.990960 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:30.992661 systemd-networkd[800]: lo: Link UP
Oct 30 23:57:30.991044 ignition[702]: parsed url from cmdline: ""
Oct 30 23:57:30.992665 systemd-networkd[800]: lo: Gained carrier
Oct 30 23:57:30.991047 ignition[702]: no config URL provided
Oct 30 23:57:30.993504 systemd-networkd[800]: Enumeration completed
Oct 30 23:57:30.991052 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 23:57:30.993642 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 23:57:30.991058 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Oct 30 23:57:30.994062 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 23:57:30.991077 ignition[702]: op(1): [started] loading QEMU firmware config module
Oct 30 23:57:30.994066 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 23:57:30.991082 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 30 23:57:30.994831 systemd-networkd[800]: eth0: Link UP
Oct 30 23:57:31.002398 ignition[702]: op(1): [finished] loading QEMU firmware config module
Oct 30 23:57:30.994949 systemd-networkd[800]: eth0: Gained carrier
Oct 30 23:57:30.994961 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 23:57:30.996636 systemd[1]: Reached target network.target - Network.
Oct 30 23:57:31.017445 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 23:57:31.058226 ignition[702]: parsing config with SHA512: 9d41c200369fe965419e6067eb6edd0e2c34d8c460179b41bd63c9d21a856a80905ed1fb6b2a16cf88b559454e0b1216dcb77af7c1d0c44ebf294a790334ccc7
Oct 30 23:57:31.063981 unknown[702]: fetched base config from "system"
Oct 30 23:57:31.063996 unknown[702]: fetched user config from "qemu"
Oct 30 23:57:31.064361 ignition[702]: fetch-offline: fetch-offline passed
Oct 30 23:57:31.066279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 23:57:31.064450 ignition[702]: Ignition finished successfully
Oct 30 23:57:31.068013 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 30 23:57:31.068769 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 30 23:57:31.096849 ignition[808]: Ignition 2.22.0
Oct 30 23:57:31.096868 ignition[808]: Stage: kargs
Oct 30 23:57:31.097011 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:31.097020 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:31.097804 ignition[808]: kargs: kargs passed
Oct 30 23:57:31.101092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 23:57:31.097849 ignition[808]: Ignition finished successfully
Oct 30 23:57:31.103238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 23:57:31.129877 ignition[816]: Ignition 2.22.0
Oct 30 23:57:31.129895 ignition[816]: Stage: disks
Oct 30 23:57:31.130021 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:31.130029 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:31.130797 ignition[816]: disks: disks passed
Oct 30 23:57:31.133484 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 23:57:31.130841 ignition[816]: Ignition finished successfully
Oct 30 23:57:31.135892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 23:57:31.137448 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 23:57:31.139658 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 23:57:31.141311 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 23:57:31.143520 systemd[1]: Reached target basic.target - Basic System.
Oct 30 23:57:31.146347 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 23:57:31.167982 systemd-resolved[291]: Detected conflict on linux IN A 10.0.0.93
Oct 30 23:57:31.167999 systemd-resolved[291]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Oct 30 23:57:31.170833 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 30 23:57:31.263972 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 23:57:31.266771 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 23:57:31.333407 kernel: EXT4-fs (vda9): mounted filesystem 8b3ccddf-a30d-4ebb-ae71-ca86224f5ff5 r/w with ordered data mode. Quota mode: none.
Oct 30 23:57:31.333550 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 23:57:31.334954 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 23:57:31.337901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 23:57:31.340196 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 23:57:31.341314 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 30 23:57:31.341357 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 23:57:31.341409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 23:57:31.356622 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 23:57:31.360303 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 23:57:31.363311 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834)
Oct 30 23:57:31.365404 kernel: BTRFS info (device vda6): first mount of filesystem 186d34a8-58e2-4a7d-a93b-654a23856822
Oct 30 23:57:31.365433 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 23:57:31.367713 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 23:57:31.367732 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 23:57:31.370113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 23:57:31.411728 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Oct 30 23:57:31.416064 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Oct 30 23:57:31.420297 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Oct 30 23:57:31.423286 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 30 23:57:31.495053 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 30 23:57:31.497409 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 30 23:57:31.499108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 30 23:57:31.517607 kernel: BTRFS info (device vda6): last unmount of filesystem 186d34a8-58e2-4a7d-a93b-654a23856822
Oct 30 23:57:31.538569 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 30 23:57:31.555243 ignition[950]: INFO : Ignition 2.22.0
Oct 30 23:57:31.555243 ignition[950]: INFO : Stage: mount
Oct 30 23:57:31.558521 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:31.558521 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:31.558521 ignition[950]: INFO : mount: mount passed
Oct 30 23:57:31.558521 ignition[950]: INFO : Ignition finished successfully
Oct 30 23:57:31.557955 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 30 23:57:31.560476 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 30 23:57:31.823181 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 30 23:57:31.824937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 23:57:31.843141 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Oct 30 23:57:31.843192 kernel: BTRFS info (device vda6): first mount of filesystem 186d34a8-58e2-4a7d-a93b-654a23856822
Oct 30 23:57:31.843203 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 23:57:31.846965 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 23:57:31.846995 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 23:57:31.848500 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 23:57:31.877984 ignition[979]: INFO : Ignition 2.22.0
Oct 30 23:57:31.877984 ignition[979]: INFO : Stage: files
Oct 30 23:57:31.879877 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:31.879877 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:31.879877 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 23:57:31.879877 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 23:57:31.879877 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 23:57:31.886583 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 23:57:31.886583 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 23:57:31.886583 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 23:57:31.886583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 30 23:57:31.886583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Oct 30 23:57:31.882971 unknown[979]: wrote ssh authorized keys file for user: core
Oct 30 23:57:32.107315 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 23:57:32.147588 systemd-networkd[800]: eth0: Gained IPv6LL
Oct 30 23:57:32.349835 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 23:57:32.352666 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 30 23:57:32.368115 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Oct 30 23:57:32.675864 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 23:57:33.031573 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 30 23:57:33.031573 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 30 23:57:33.035919 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 30 23:57:33.038199 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 30 23:57:33.056777 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 23:57:33.060325 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 23:57:33.062320 ignition[979]: INFO : files: files passed
Oct 30 23:57:33.062320 ignition[979]: INFO : Ignition finished successfully
Oct 30 23:57:33.064431 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 23:57:33.067951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 23:57:33.070295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 23:57:33.093272 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 23:57:33.093406 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 23:57:33.097479 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 30 23:57:33.098981 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 23:57:33.098981 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 23:57:33.102681 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 23:57:33.101749 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 23:57:33.104162 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 23:57:33.107455 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 23:57:33.182804 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 23:57:33.182954 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 23:57:33.185754 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 23:57:33.187984 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 23:57:33.190225 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 23:57:33.191241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 23:57:33.214132 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 23:57:33.217107 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 23:57:33.236177 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 23:57:33.237640 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 23:57:33.239961 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 23:57:33.242011 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 23:57:33.242144 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 23:57:33.245050 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 23:57:33.247357 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 23:57:33.249415 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 23:57:33.251464 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 23:57:33.253764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 23:57:33.256629 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 23:57:33.259589 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 23:57:33.261691 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 23:57:33.264287 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 23:57:33.266592 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 23:57:33.268506 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 23:57:33.270395 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 23:57:33.270542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 23:57:33.273363 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 23:57:33.275626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 23:57:33.277743 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 23:57:33.277874 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 23:57:33.280097 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 23:57:33.280237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 23:57:33.283617 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 23:57:33.283746 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 23:57:33.285896 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 23:57:33.287689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 23:57:33.287817 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 23:57:33.289906 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 23:57:33.291997 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 23:57:33.293738 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 23:57:33.293830 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 23:57:33.295726 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 23:57:33.295805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 23:57:33.298249 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 23:57:33.298373 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 23:57:33.300508 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 23:57:33.300636 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 23:57:33.303325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 23:57:33.306209 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 23:57:33.306359 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 23:57:33.317798 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 23:57:33.318824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 23:57:33.318977 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 23:57:33.321496 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 23:57:33.321628 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 23:57:33.327798 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 23:57:33.328476 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 23:57:33.333610 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 23:57:33.340048 ignition[1034]: INFO : Ignition 2.22.0
Oct 30 23:57:33.340048 ignition[1034]: INFO : Stage: umount
Oct 30 23:57:33.342016 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 23:57:33.342016 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 23:57:33.342016 ignition[1034]: INFO : umount: umount passed
Oct 30 23:57:33.342016 ignition[1034]: INFO : Ignition finished successfully
Oct 30 23:57:33.344849 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 23:57:33.344948 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 23:57:33.346621 systemd[1]: Stopped target network.target - Network.
Oct 30 23:57:33.348345 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 23:57:33.348465 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 23:57:33.350602 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 23:57:33.350664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 23:57:33.352895 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 23:57:33.352951 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 23:57:33.355059 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 23:57:33.355106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 23:57:33.357423 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 23:57:33.359597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 23:57:33.368332 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 23:57:33.368465 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 23:57:33.372299 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 30 23:57:33.372566 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 23:57:33.372673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 23:57:33.377217 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 30 23:57:33.378104 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 23:57:33.379633 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 23:57:33.379671 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 23:57:33.383017 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 23:57:33.384409 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 23:57:33.384478 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 23:57:33.387600 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 23:57:33.387655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 23:57:33.391133 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 23:57:33.391184 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 23:57:33.394074 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 23:57:33.394129 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 23:57:33.404442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 23:57:33.408170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 30 23:57:33.408236 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 30 23:57:33.408821 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 23:57:33.408907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 23:57:33.411286 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 23:57:33.411374 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 23:57:33.424107 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 23:57:33.424274 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 23:57:33.426744 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 23:57:33.426829 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 23:57:33.429215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 23:57:33.429288 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 23:57:33.430630 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 23:57:33.430662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 23:57:33.433214 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 23:57:33.433271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 23:57:33.436467 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 23:57:33.436525 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 23:57:33.438924 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 23:57:33.438985 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 23:57:33.443717 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 23:57:33.445045 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 23:57:33.445104 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 23:57:33.448198 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 23:57:33.448239 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 23:57:33.451444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 23:57:33.451488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 23:57:33.455990 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Oct 30 23:57:33.456039 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 30 23:57:33.456071 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 30 23:57:33.462728 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 23:57:33.462835 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 23:57:33.465165 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 23:57:33.468102 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 23:57:33.478594 systemd[1]: Switching root.
Oct 30 23:57:33.517737 systemd-journald[244]: Journal stopped
Oct 30 23:57:34.359611 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Oct 30 23:57:34.359660 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 23:57:34.359672 kernel: SELinux: policy capability open_perms=1
Oct 30 23:57:34.359685 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 23:57:34.359698 kernel: SELinux: policy capability always_check_network=0
Oct 30 23:57:34.359710 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 23:57:34.359720 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 23:57:34.359730 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 23:57:34.359738 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 23:57:34.359747 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 23:57:34.359757 kernel: audit: type=1403 audit(1761868653.733:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 23:57:34.359770 systemd[1]: Successfully loaded SELinux policy in 63.220ms.
Oct 30 23:57:34.359785 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.001ms.
Oct 30 23:57:34.359797 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 23:57:34.359810 systemd[1]: Detected virtualization kvm.
Oct 30 23:57:34.359836 systemd[1]: Detected architecture arm64.
Oct 30 23:57:34.359892 systemd[1]: Detected first boot.
Oct 30 23:57:34.359905 systemd[1]: Initializing machine ID from VM UUID.
Oct 30 23:57:34.359916 zram_generator::config[1081]: No configuration found.
Oct 30 23:57:34.359927 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 23:57:34.359937 systemd[1]: Populated /etc with preset unit settings.
Oct 30 23:57:34.359950 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 30 23:57:34.359960 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 23:57:34.359971 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 23:57:34.359981 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 23:57:34.359991 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 23:57:34.360001 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 23:57:34.360011 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 23:57:34.360021 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 23:57:34.360031 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 23:57:34.360041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 23:57:34.360051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 23:57:34.360063 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 23:57:34.360073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 23:57:34.360083 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 23:57:34.360094 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 23:57:34.360103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 23:57:34.360114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 23:57:34.360124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 23:57:34.360134 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 30 23:57:34.360144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 23:57:34.360155 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 23:57:34.360165 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 23:57:34.360175 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 23:57:34.360185 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 23:57:34.360195 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 23:57:34.360205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 23:57:34.360215 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 23:57:34.360225 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 23:57:34.360236 systemd[1]: Reached target swap.target - Swaps.
Oct 30 23:57:34.360246 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 23:57:34.360257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 23:57:34.360267 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 23:57:34.360277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 23:57:34.360287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 23:57:34.360296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 23:57:34.360306 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 23:57:34.360316 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 23:57:34.360327 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 23:57:34.360337 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 23:57:34.360347 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 23:57:34.360357 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 23:57:34.360367 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 23:57:34.360377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 23:57:34.360396 systemd[1]: Reached target machines.target - Containers.
Oct 30 23:57:34.360406 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 23:57:34.360418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 23:57:34.360428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 23:57:34.360438 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 23:57:34.360448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 23:57:34.360459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 23:57:34.360469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 23:57:34.360479 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 23:57:34.360489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 23:57:34.360500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 23:57:34.360511 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 23:57:34.360520 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 23:57:34.360530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 23:57:34.360540 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 23:57:34.360556 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 23:57:34.360567 kernel: fuse: init (API version 7.41)
Oct 30 23:57:34.360577 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 23:57:34.360586 kernel: loop: module loaded
Oct 30 23:57:34.360597 kernel: ACPI: bus type drm_connector registered
Oct 30 23:57:34.360606 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 23:57:34.360617 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 23:57:34.360626 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 23:57:34.360636 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 23:57:34.360646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 23:57:34.360657 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 30 23:57:34.360668 systemd[1]: Stopped verity-setup.service.
Oct 30 23:57:34.360678 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 23:57:34.360687 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 23:57:34.360718 systemd-journald[1160]: Collecting audit messages is disabled.
Oct 30 23:57:34.360739 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 23:57:34.360750 systemd-journald[1160]: Journal started
Oct 30 23:57:34.360772 systemd-journald[1160]: Runtime Journal (/run/log/journal/3bf18bba129a478b96b88334bfe98009) is 6M, max 48.5M, 42.4M free.
Oct 30 23:57:34.112472 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 23:57:34.136592 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 23:57:34.137039 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 23:57:34.363776 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 23:57:34.364409 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 23:57:34.365674 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 23:57:34.366992 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 23:57:34.369415 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 23:57:34.370907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 23:57:34.372517 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 23:57:34.372708 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 23:57:34.374254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 23:57:34.374431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 23:57:34.375856 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 23:57:34.376027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 23:57:34.377564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 23:57:34.377713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 23:57:34.379208 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 23:57:34.379366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 23:57:34.380923 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 23:57:34.381088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 23:57:34.382612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 23:57:34.384265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 23:57:34.386124 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 23:57:34.387808 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 23:57:34.400227 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 23:57:34.402718 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 23:57:34.404775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 23:57:34.406085 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 23:57:34.406128 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 23:57:34.408144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 23:57:34.417222 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 23:57:34.418578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 23:57:34.419620 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 23:57:34.421633 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 23:57:34.423036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 23:57:34.423958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 23:57:34.425275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 23:57:34.426570 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 23:57:34.429527 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 23:57:34.434347 systemd-journald[1160]: Time spent on flushing to /var/log/journal/3bf18bba129a478b96b88334bfe98009 is 16.665ms for 887 entries.
Oct 30 23:57:34.434347 systemd-journald[1160]: System Journal (/var/log/journal/3bf18bba129a478b96b88334bfe98009) is 8M, max 195.6M, 187.6M free.
Oct 30 23:57:34.458040 systemd-journald[1160]: Received client request to flush runtime journal.
Oct 30 23:57:34.435943 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 23:57:34.439619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 23:57:34.441450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 23:57:34.442963 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 23:57:34.446750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 23:57:34.450835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 23:57:34.453432 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 23:57:34.461430 kernel: loop0: detected capacity change from 0 to 119368
Oct 30 23:57:34.466711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 23:57:34.468668 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 23:57:34.475402 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 23:57:34.483959 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 23:57:34.486740 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 23:57:34.507404 kernel: loop1: detected capacity change from 0 to 207008
Oct 30 23:57:34.520246 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Oct 30 23:57:34.520264 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Oct 30 23:57:34.524163 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 23:57:34.532402 kernel: loop2: detected capacity change from 0 to 100632
Oct 30 23:57:34.534085 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 23:57:34.578405 kernel: loop3: detected capacity change from 0 to 119368
Oct 30 23:57:34.584587 kernel: loop4: detected capacity change from 0 to 207008
Oct 30 23:57:34.590571 kernel: loop5: detected capacity change from 0 to 100632
Oct 30 23:57:34.594630 (sd-merge)[1220]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 30 23:57:34.595001 (sd-merge)[1220]: Merged extensions into '/usr'.
Oct 30 23:57:34.598571 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 23:57:34.598591 systemd[1]: Reloading...
Oct 30 23:57:34.649422 zram_generator::config[1245]: No configuration found.
Oct 30 23:57:34.705845 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 23:57:34.798506 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 23:57:34.798681 systemd[1]: Reloading finished in 199 ms.
Oct 30 23:57:34.833465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 23:57:34.835106 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 23:57:34.855647 systemd[1]: Starting ensure-sysext.service...
Oct 30 23:57:34.857628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 23:57:34.866169 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)...
Oct 30 23:57:34.866301 systemd[1]: Reloading...
Oct 30 23:57:34.870881 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 23:57:34.870913 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 23:57:34.871152 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 23:57:34.871332 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 23:57:34.871955 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 23:57:34.872157 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Oct 30 23:57:34.872204 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Oct 30 23:57:34.875186 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 23:57:34.875203 systemd-tmpfiles[1283]: Skipping /boot
Oct 30 23:57:34.880891 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 23:57:34.880906 systemd-tmpfiles[1283]: Skipping /boot
Oct 30 23:57:34.910433 zram_generator::config[1309]: No configuration found.
Oct 30 23:57:35.039495 systemd[1]: Reloading finished in 172 ms.
Oct 30 23:57:35.057888 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 23:57:35.063656 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 23:57:35.071457 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 23:57:35.074103 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 23:57:35.087305 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 23:57:35.090555 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 23:57:35.095606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 23:57:35.098539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 23:57:35.105314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 23:57:35.106813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 23:57:35.112672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 23:57:35.117824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 23:57:35.120036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 23:57:35.120165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 23:57:35.123102 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 30 23:57:35.125944 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 23:57:35.128476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 23:57:35.134684 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 23:57:35.136706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 23:57:35.136866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 23:57:35.138983 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Oct 30 23:57:35.139272 augenrules[1375]: No rules
Oct 30 23:57:35.140410 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 23:57:35.140666 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 23:57:35.142605 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 23:57:35.142807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 23:57:35.151112 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 30 23:57:35.155707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 23:57:35.157023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 23:57:35.161605 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 23:57:35.169270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 23:57:35.170639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 23:57:35.170763 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 23:57:35.172893 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 30 23:57:35.174048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 23:57:35.174960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 23:57:35.176744 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 30 23:57:35.178980 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 23:57:35.181015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 23:57:35.181188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 23:57:35.187170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 23:57:35.187932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 23:57:35.195071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 23:57:35.196474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 23:57:35.198274 systemd[1]: Finished ensure-sysext.service.
Oct 30 23:57:35.201335 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 30 23:57:35.213765 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 23:57:35.215123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 23:57:35.216456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 23:57:35.222018 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 23:57:35.228659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 23:57:35.231667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 23:57:35.231718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 23:57:35.247150 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 23:57:35.253603 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 30 23:57:35.256468 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 23:57:35.257074 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 23:57:35.257281 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 23:57:35.263829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 23:57:35.264529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 23:57:35.267852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 23:57:35.268037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 23:57:35.272480 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 30 23:57:35.291203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 23:57:35.299207 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 30 23:57:35.300588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 23:57:35.300664 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 23:57:35.301875 augenrules[1428]: /sbin/augenrules: No change
Oct 30 23:57:35.315894 augenrules[1463]: No rules
Oct 30 23:57:35.317730 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 23:57:35.318063 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 23:57:35.323695 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 30 23:57:35.365285 systemd-resolved[1349]: Positive Trust Anchors:
Oct 30 23:57:35.365300 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 23:57:35.365332 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 23:57:35.371375 systemd-networkd[1434]: lo: Link UP
Oct 30 23:57:35.371404 systemd-networkd[1434]: lo: Gained carrier
Oct 30 23:57:35.371918 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Oct 30 23:57:35.372162 systemd-networkd[1434]: Enumeration completed
Oct 30 23:57:35.372282 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 23:57:35.372602 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 23:57:35.372612 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 23:57:35.373207 systemd-networkd[1434]: eth0: Link UP
Oct 30 23:57:35.373319 systemd-networkd[1434]: eth0: Gained carrier
Oct 30 23:57:35.373337 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 30 23:57:35.374342 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 23:57:35.375773 systemd[1]: Reached target network.target - Network.
Oct 30 23:57:35.376882 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 23:57:35.379703 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 30 23:57:35.382247 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 30 23:57:35.385564 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 30 23:57:35.386454 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 23:57:35.387022 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 23:57:35.388243 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 30 23:57:35.388969 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Oct 30 23:57:35.389640 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 30 23:57:35.391218 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 30 23:57:35.391307 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 30 23:57:35.391351 systemd-timesyncd[1435]: Initial clock synchronization to Thu 2025-10-30 23:57:35.457850 UTC.
Oct 30 23:57:35.392949 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 30 23:57:35.392980 systemd[1]: Reached target paths.target - Path Units.
Oct 30 23:57:35.394146 systemd[1]: Reached target time-set.target - System Time Set.
Oct 30 23:57:35.397586 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 30 23:57:35.398844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 30 23:57:35.400170 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 23:57:35.401908 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 30 23:57:35.415972 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 30 23:57:35.420707 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 30 23:57:35.424769 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 30 23:57:35.426322 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 30 23:57:35.433566 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 30 23:57:35.435236 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 30 23:57:35.437526 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 30 23:57:35.439019 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 30 23:57:35.448419 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 23:57:35.449504 systemd[1]: Reached target basic.target - Basic System.
Oct 30 23:57:35.450529 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 30 23:57:35.450575 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 30 23:57:35.451625 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 30 23:57:35.453824 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 30 23:57:35.455805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 30 23:57:35.467400 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 30 23:57:35.470420 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 30 23:57:35.471524 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 30 23:57:35.472561 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 30 23:57:35.474607 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 30 23:57:35.476884 jq[1494]: false
Oct 30 23:57:35.478557 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 30 23:57:35.480818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 30 23:57:35.484688 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 30 23:57:35.487378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 23:57:35.489653 extend-filesystems[1495]: Found /dev/vda6
Oct 30 23:57:35.491411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 30 23:57:35.492221 extend-filesystems[1495]: Found /dev/vda9
Oct 30 23:57:35.494679 extend-filesystems[1495]: Checking size of /dev/vda9
Oct 30 23:57:35.496680 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 30 23:57:35.497514 systemd[1]: Starting update-engine.service - Update Engine...
Oct 30 23:57:35.499454 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 30 23:57:35.505358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 30 23:57:35.507157 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 30 23:57:35.507356 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 30 23:57:35.507633 systemd[1]: motdgen.service: Deactivated successfully.
Oct 30 23:57:35.510061 jq[1516]: true
Oct 30 23:57:35.507842 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 30 23:57:35.510984 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 30 23:57:35.511229 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 30 23:57:35.515878 extend-filesystems[1495]: Resized partition /dev/vda9
Oct 30 23:57:35.520134 extend-filesystems[1528]: resize2fs 1.47.3 (8-Jul-2025)
Oct 30 23:57:35.530887 update_engine[1514]: I20251030 23:57:35.530670 1514 main.cc:92] Flatcar Update Engine starting
Oct 30 23:57:35.535416 jq[1525]: true
Oct 30 23:57:35.539496 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 30 23:57:35.571323 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 30 23:57:35.572922 systemd-logind[1507]: New seat seat0.
Oct 30 23:57:35.574399 tar[1522]: linux-arm64/LICENSE
Oct 30 23:57:35.574399 tar[1522]: linux-arm64/helm
Oct 30 23:57:35.575448 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 30 23:57:35.584955 dbus-daemon[1492]: [system] SELinux support is enabled
Oct 30 23:57:35.585123 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 30 23:57:35.588492 update_engine[1514]: I20251030 23:57:35.588428 1514 update_check_scheduler.cc:74] Next update check in 3m48s
Oct 30 23:57:35.590935 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 30 23:57:35.591692 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 30 23:57:35.590967 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 30 23:57:35.592695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 30 23:57:35.592791 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 30 23:57:35.596331 systemd[1]: Started update-engine.service - Update Engine.
Oct 30 23:57:35.600679 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 30 23:57:35.603646 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 30 23:57:35.635270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 23:57:35.643405 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 30 23:57:35.644072 bash[1554]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 23:57:35.645672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 30 23:57:35.648688 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 30 23:57:35.654946 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 30 23:57:35.661230 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 30 23:57:35.661230 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 30 23:57:35.661230 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 30 23:57:35.667332 extend-filesystems[1495]: Resized filesystem in /dev/vda9
Oct 30 23:57:35.663998 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 30 23:57:35.664310 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 30 23:57:35.741395 containerd[1536]: time="2025-10-30T23:57:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 30 23:57:35.741657 containerd[1536]: time="2025-10-30T23:57:35.741371640Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 30 23:57:35.752456 containerd[1536]: time="2025-10-30T23:57:35.752397760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.12µs"
Oct 30 23:57:35.752456 containerd[1536]: time="2025-10-30T23:57:35.752444760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 30 23:57:35.752456 containerd[1536]: time="2025-10-30T23:57:35.752466040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 30 23:57:35.752667 containerd[1536]: time="2025-10-30T23:57:35.752644360Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 30 23:57:35.752697 containerd[1536]: time="2025-10-30T23:57:35.752668720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 30 23:57:35.752715 containerd[1536]: time="2025-10-30T23:57:35.752697160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 23:57:35.752770 containerd[1536]: time="2025-10-30T23:57:35.752748640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 23:57:35.752770 containerd[1536]: time="2025-10-30T23:57:35.752765920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753028 containerd[1536]: time="2025-10-30T23:57:35.753003520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753028 containerd[1536]: time="2025-10-30T23:57:35.753024760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753073 containerd[1536]: time="2025-10-30T23:57:35.753037480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753073 containerd[1536]: time="2025-10-30T23:57:35.753046920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753207 containerd[1536]: time="2025-10-30T23:57:35.753122120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753419 containerd[1536]: time="2025-10-30T23:57:35.753317680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753419 containerd[1536]: time="2025-10-30T23:57:35.753367000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 23:57:35.753419 containerd[1536]: time="2025-10-30T23:57:35.753395040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 30 23:57:35.753517 containerd[1536]: time="2025-10-30T23:57:35.753431480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 30 23:57:35.753698 containerd[1536]: time="2025-10-30T23:57:35.753662960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 30 23:57:35.753750 containerd[1536]: time="2025-10-30T23:57:35.753735760Z" level=info msg="metadata content store policy set" policy=shared
Oct 30 23:57:35.758464 containerd[1536]: time="2025-10-30T23:57:35.758420200Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 30 23:57:35.758608 containerd[1536]: time="2025-10-30T23:57:35.758527200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 30 23:57:35.758608 containerd[1536]: time="2025-10-30T23:57:35.758600880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 30 23:57:35.758655 containerd[1536]: time="2025-10-30T23:57:35.758618160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 30 23:57:35.758655 containerd[1536]: time="2025-10-30T23:57:35.758631240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 30 23:57:35.758738 containerd[1536]: time="2025-10-30T23:57:35.758684920Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 30 23:57:35.758738 containerd[1536]: time="2025-10-30T23:57:35.758707680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 30 23:57:35.758738 containerd[1536]: time="2025-10-30T23:57:35.758733160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 30 23:57:35.758842 containerd[1536]: time="2025-10-30T23:57:35.758786040Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 23:57:35.758868 containerd[1536]: time="2025-10-30T23:57:35.758846320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 23:57:35.758868 containerd[1536]: time="2025-10-30T23:57:35.758859160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 23:57:35.758900 containerd[1536]: time="2025-10-30T23:57:35.758878120Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 23:57:35.759207 containerd[1536]: time="2025-10-30T23:57:35.759127560Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 23:57:35.759237 containerd[1536]: time="2025-10-30T23:57:35.759205480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 23:57:35.759280 containerd[1536]: time="2025-10-30T23:57:35.759262920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 30 23:57:35.759302 containerd[1536]: time="2025-10-30T23:57:35.759283720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 30 23:57:35.759302 containerd[1536]: time="2025-10-30T23:57:35.759297160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 30 23:57:35.759359 containerd[1536]: time="2025-10-30T23:57:35.759307880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 30 23:57:35.759393 containerd[1536]: time="2025-10-30T23:57:35.759366000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 30 23:57:35.759463 containerd[1536]: time="2025-10-30T23:57:35.759447560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 30 23:57:35.759489 containerd[1536]: time="2025-10-30T23:57:35.759469200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 30 23:57:35.759594 containerd[1536]: time="2025-10-30T23:57:35.759557520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 30 23:57:35.759594 containerd[1536]: time="2025-10-30T23:57:35.759581720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 30 23:57:35.760031 containerd[1536]: time="2025-10-30T23:57:35.759995640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 30 23:57:35.760031 containerd[1536]: time="2025-10-30T23:57:35.760028720Z" level=info msg="Start snapshots syncer"
Oct 30 23:57:35.760093 containerd[1536]: time="2025-10-30T23:57:35.760057800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 30 23:57:35.760357 containerd[1536]: time="2025-10-30T23:57:35.760275880Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 30 23:57:35.760357 containerd[1536]: time="2025-10-30T23:57:35.760333360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 30 23:57:35.760496 containerd[1536]: time="2025-10-30T23:57:35.760419320Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760529400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760568280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760582040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760598760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760611400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760621680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760632200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 30 23:57:35.760652 containerd[1536]: time="2025-10-30T23:57:35.760655760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760666840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760677440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760701520Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760716400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760725680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760736600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760745440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760755160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 30 23:57:35.760800 containerd[1536]: time="2025-10-30T23:57:35.760765600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 30 23:57:35.760943 containerd[1536]: time="2025-10-30T23:57:35.760877600Z" level=info msg="runtime interface created"
Oct 30 23:57:35.760943 containerd[1536]: time="2025-10-30T23:57:35.760884840Z" level=info msg="created NRI interface"
Oct 30 23:57:35.760943 containerd[1536]: time="2025-10-30T23:57:35.760893440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 30 23:57:35.760943 containerd[1536]: time="2025-10-30T23:57:35.760904440Z" level=info msg="Connect containerd service"
Oct 30 23:57:35.760943 containerd[1536]: time="2025-10-30T23:57:35.760930560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 30 23:57:35.762832 containerd[1536]: time="2025-10-30T23:57:35.762632560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 30 23:57:35.835609 containerd[1536]: time="2025-10-30T23:57:35.835506240Z" level=info msg="Start subscribing containerd event"
Oct 30 23:57:35.835759 containerd[1536]: time="2025-10-30T23:57:35.835741720Z" level=info msg="Start recovering state"
Oct 30 23:57:35.835888 containerd[1536]: time="2025-10-30T23:57:35.835874560Z" level=info msg="Start event monitor"
Oct 30 23:57:35.835952 containerd[1536]: time="2025-10-30T23:57:35.835939680Z" level=info msg="Start cni network conf syncer for default"
Oct 30 23:57:35.836024 containerd[1536]: time="2025-10-30T23:57:35.836010520Z" level=info msg="Start streaming server"
Oct 30 23:57:35.836079 containerd[1536]: time="2025-10-30T23:57:35.836069040Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 30 23:57:35.836147 containerd[1536]: time="2025-10-30T23:57:35.836137240Z" level=info msg="runtime interface starting up..."
Oct 30 23:57:35.836188 containerd[1536]: time="2025-10-30T23:57:35.836178920Z" level=info msg="starting plugins..."
Oct 30 23:57:35.836240 containerd[1536]: time="2025-10-30T23:57:35.836229120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 30 23:57:35.836335 containerd[1536]: time="2025-10-30T23:57:35.836011520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 30 23:57:35.836415 containerd[1536]: time="2025-10-30T23:57:35.836371200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 30 23:57:35.836558 containerd[1536]: time="2025-10-30T23:57:35.836531400Z" level=info msg="containerd successfully booted in 0.096191s"
Oct 30 23:57:35.836646 systemd[1]: Started containerd.service - containerd container runtime.
Oct 30 23:57:35.915882 tar[1522]: linux-arm64/README.md
Oct 30 23:57:35.933828 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 30 23:57:35.990910 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 30 23:57:36.011365 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 30 23:57:36.015277 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 30 23:57:36.041138 systemd[1]: issuegen.service: Deactivated successfully.
Oct 30 23:57:36.041370 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 30 23:57:36.044194 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 30 23:57:36.064381 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 30 23:57:36.067346 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 30 23:57:36.069671 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 30 23:57:36.071136 systemd[1]: Reached target getty.target - Login Prompts.
Oct 30 23:57:36.435772 systemd-networkd[1434]: eth0: Gained IPv6LL
Oct 30 23:57:36.439101 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 30 23:57:36.441006 systemd[1]: Reached target network-online.target - Network is Online.
Oct 30 23:57:36.443521 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 30 23:57:36.446047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 23:57:36.467048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 30 23:57:36.483984 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 30 23:57:36.485442 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 30 23:57:36.487981 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 30 23:57:36.490672 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 30 23:57:37.017875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:57:37.019680 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 30 23:57:37.021838 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 23:57:37.025478 systemd[1]: Startup finished in 2.072s (kernel) + 5.112s (initrd) + 3.356s (userspace) = 10.541s.
Oct 30 23:57:37.381939 kubelet[1629]: E1030 23:57:37.381809 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 23:57:37.384360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 23:57:37.384514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 23:57:37.385520 systemd[1]: kubelet.service: Consumed 738ms CPU time, 257.2M memory peak.
Oct 30 23:57:41.873134 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 30 23:57:41.877042 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:44752.service - OpenSSH per-connection server daemon (10.0.0.1:44752).
Oct 30 23:57:41.952291 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 44752 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:41.953768 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:41.960245 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 30 23:57:41.961241 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 30 23:57:41.972104 systemd-logind[1507]: New session 1 of user core.
Oct 30 23:57:41.984483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 30 23:57:41.988433 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 30 23:57:42.016477 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 30 23:57:42.018623 systemd-logind[1507]: New session c1 of user core.
Oct 30 23:57:42.123532 systemd[1648]: Queued start job for default target default.target.
Oct 30 23:57:42.147464 systemd[1648]: Created slice app.slice - User Application Slice.
Oct 30 23:57:42.147494 systemd[1648]: Reached target paths.target - Paths.
Oct 30 23:57:42.147533 systemd[1648]: Reached target timers.target - Timers.
Oct 30 23:57:42.149174 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 30 23:57:42.159914 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 30 23:57:42.160039 systemd[1648]: Reached target sockets.target - Sockets.
Oct 30 23:57:42.160091 systemd[1648]: Reached target basic.target - Basic System.
Oct 30 23:57:42.160120 systemd[1648]: Reached target default.target - Main User Target.
Oct 30 23:57:42.160149 systemd[1648]: Startup finished in 135ms.
Oct 30 23:57:42.160250 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 30 23:57:42.162019 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 30 23:57:42.230013 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:44756.service - OpenSSH per-connection server daemon (10.0.0.1:44756).
Oct 30 23:57:42.280772 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 44756 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:42.282149 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:42.286302 systemd-logind[1507]: New session 2 of user core.
Oct 30 23:57:42.300343 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 30 23:57:42.353046 sshd[1662]: Connection closed by 10.0.0.1 port 44756
Oct 30 23:57:42.354556 sshd-session[1659]: pam_unix(sshd:session): session closed for user core
Oct 30 23:57:42.370179 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:44756.service: Deactivated successfully.
Oct 30 23:57:42.372367 systemd[1]: session-2.scope: Deactivated successfully.
Oct 30 23:57:42.375969 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit.
Oct 30 23:57:42.378029 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:44768.service - OpenSSH per-connection server daemon (10.0.0.1:44768).
Oct 30 23:57:42.378545 systemd-logind[1507]: Removed session 2.
Oct 30 23:57:42.435733 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 44768 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:42.437030 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:42.441198 systemd-logind[1507]: New session 3 of user core.
Oct 30 23:57:42.451570 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 30 23:57:42.499576 sshd[1672]: Connection closed by 10.0.0.1 port 44768
Oct 30 23:57:42.499883 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Oct 30 23:57:42.520846 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:44768.service: Deactivated successfully.
Oct 30 23:57:42.524110 systemd[1]: session-3.scope: Deactivated successfully.
Oct 30 23:57:42.525294 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit.
Oct 30 23:57:42.528459 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:44772.service - OpenSSH per-connection server daemon (10.0.0.1:44772).
Oct 30 23:57:42.528910 systemd-logind[1507]: Removed session 3.
Oct 30 23:57:42.585781 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 44772 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:42.586997 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:42.590943 systemd-logind[1507]: New session 4 of user core.
Oct 30 23:57:42.597584 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 30 23:57:42.651513 sshd[1681]: Connection closed by 10.0.0.1 port 44772
Oct 30 23:57:42.651563 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Oct 30 23:57:42.663905 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:44772.service: Deactivated successfully.
Oct 30 23:57:42.666370 systemd[1]: session-4.scope: Deactivated successfully.
Oct 30 23:57:42.667123 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit.
Oct 30 23:57:42.670646 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:44778.service - OpenSSH per-connection server daemon (10.0.0.1:44778).
Oct 30 23:57:42.671131 systemd-logind[1507]: Removed session 4.
Oct 30 23:57:42.738220 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 44778 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:42.739496 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:42.744436 systemd-logind[1507]: New session 5 of user core.
Oct 30 23:57:42.754620 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 30 23:57:42.813987 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 30 23:57:42.814754 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 23:57:42.827269 sudo[1691]: pam_unix(sudo:session): session closed for user root
Oct 30 23:57:42.830273 sshd[1690]: Connection closed by 10.0.0.1 port 44778
Oct 30 23:57:42.829438 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Oct 30 23:57:42.842736 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:44778.service: Deactivated successfully.
Oct 30 23:57:42.844312 systemd[1]: session-5.scope: Deactivated successfully.
Oct 30 23:57:42.845106 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit.
Oct 30 23:57:42.848459 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:44790.service - OpenSSH per-connection server daemon (10.0.0.1:44790).
Oct 30 23:57:42.849115 systemd-logind[1507]: Removed session 5.
Oct 30 23:57:42.916527 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 44790 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:42.914511 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:42.921012 systemd-logind[1507]: New session 6 of user core.
Oct 30 23:57:42.934961 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 30 23:57:42.986365 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 30 23:57:42.986646 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 23:57:43.059879 sudo[1702]: pam_unix(sudo:session): session closed for user root
Oct 30 23:57:43.065889 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 30 23:57:43.066156 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 23:57:43.074980 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 23:57:43.120598 augenrules[1724]: No rules
Oct 30 23:57:43.121871 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 23:57:43.122062 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 23:57:43.122986 sudo[1701]: pam_unix(sudo:session): session closed for user root
Oct 30 23:57:43.124415 sshd[1700]: Connection closed by 10.0.0.1 port 44790
Oct 30 23:57:43.124479 sshd-session[1697]: pam_unix(sshd:session): session closed for user core
Oct 30 23:57:43.135187 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:44790.service: Deactivated successfully.
Oct 30 23:57:43.137846 systemd[1]: session-6.scope: Deactivated successfully.
Oct 30 23:57:43.138534 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit.
Oct 30 23:57:43.141024 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:44800.service - OpenSSH per-connection server daemon (10.0.0.1:44800).
Oct 30 23:57:43.141717 systemd-logind[1507]: Removed session 6.
Oct 30 23:57:43.198940 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 44800 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:57:43.200743 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:57:43.205626 systemd-logind[1507]: New session 7 of user core.
Oct 30 23:57:43.225592 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 30 23:57:43.279927 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 30 23:57:43.280498 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 23:57:43.572150 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 30 23:57:43.583785 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 30 23:57:43.780134 dockerd[1757]: time="2025-10-30T23:57:43.780065989Z" level=info msg="Starting up"
Oct 30 23:57:43.780963 dockerd[1757]: time="2025-10-30T23:57:43.780940760Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 30 23:57:43.791561 dockerd[1757]: time="2025-10-30T23:57:43.791508082Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 30 23:57:43.824409 dockerd[1757]: time="2025-10-30T23:57:43.824269421Z" level=info msg="Loading containers: start."
Oct 30 23:57:43.835411 kernel: Initializing XFRM netlink socket
Oct 30 23:57:44.053870 systemd-networkd[1434]: docker0: Link UP
Oct 30 23:57:44.057311 dockerd[1757]: time="2025-10-30T23:57:44.057256666Z" level=info msg="Loading containers: done."
Oct 30 23:57:44.069529 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck132944876-merged.mount: Deactivated successfully.
Oct 30 23:57:44.073152 dockerd[1757]: time="2025-10-30T23:57:44.072788699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 30 23:57:44.073152 dockerd[1757]: time="2025-10-30T23:57:44.072894922Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 30 23:57:44.073152 dockerd[1757]: time="2025-10-30T23:57:44.072990314Z" level=info msg="Initializing buildkit"
Oct 30 23:57:44.098959 dockerd[1757]: time="2025-10-30T23:57:44.098853223Z" level=info msg="Completed buildkit initialization"
Oct 30 23:57:44.103957 dockerd[1757]: time="2025-10-30T23:57:44.103917030Z" level=info msg="Daemon has completed initialization"
Oct 30 23:57:44.104180 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 30 23:57:44.104275 dockerd[1757]: time="2025-10-30T23:57:44.104121854Z" level=info msg="API listen on /run/docker.sock"
Oct 30 23:57:44.646043 containerd[1536]: time="2025-10-30T23:57:44.646004874Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 30 23:57:45.318292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731572301.mount: Deactivated successfully.
Oct 30 23:57:46.402794 containerd[1536]: time="2025-10-30T23:57:46.402722386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:46.403794 containerd[1536]: time="2025-10-30T23:57:46.403518045Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Oct 30 23:57:46.405079 containerd[1536]: time="2025-10-30T23:57:46.405045223Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:46.408557 containerd[1536]: time="2025-10-30T23:57:46.408514887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:46.410329 containerd[1536]: time="2025-10-30T23:57:46.410293014Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.764246869s"
Oct 30 23:57:46.410471 containerd[1536]: time="2025-10-30T23:57:46.410453925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Oct 30 23:57:46.411199 containerd[1536]: time="2025-10-30T23:57:46.411177387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 30 23:57:47.483697 containerd[1536]: time="2025-10-30T23:57:47.483648316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:47.485175 containerd[1536]: time="2025-10-30T23:57:47.485147183Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Oct 30 23:57:47.485872 containerd[1536]: time="2025-10-30T23:57:47.485826042Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:47.489449 containerd[1536]: time="2025-10-30T23:57:47.489143188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:47.490157 containerd[1536]: time="2025-10-30T23:57:47.490107593Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.078901065s"
Oct 30 23:57:47.490157 containerd[1536]: time="2025-10-30T23:57:47.490144063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Oct 30 23:57:47.490564 containerd[1536]: time="2025-10-30T23:57:47.490525192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 30 23:57:47.634887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 30 23:57:47.636419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 23:57:47.767874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:57:47.771827 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 23:57:47.819674 kubelet[2046]: E1030 23:57:47.819599 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 23:57:47.822287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 23:57:47.822447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 23:57:47.822758 systemd[1]: kubelet.service: Consumed 144ms CPU time, 108.1M memory peak.
Oct 30 23:57:48.791805 containerd[1536]: time="2025-10-30T23:57:48.791054511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:48.792132 containerd[1536]: time="2025-10-30T23:57:48.792100783Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Oct 30 23:57:48.793081 containerd[1536]: time="2025-10-30T23:57:48.793015434Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:48.795506 containerd[1536]: time="2025-10-30T23:57:48.795474111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:48.796419 containerd[1536]: time="2025-10-30T23:57:48.796397177Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.305826222s"
Oct 30 23:57:48.796451 containerd[1536]: time="2025-10-30T23:57:48.796426105Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Oct 30 23:57:48.797512 containerd[1536]: time="2025-10-30T23:57:48.797488644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 30 23:57:49.791409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330174865.mount: Deactivated successfully.
Oct 30 23:57:50.031667 containerd[1536]: time="2025-10-30T23:57:50.031611753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:50.032557 containerd[1536]: time="2025-10-30T23:57:50.032341809Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Oct 30 23:57:50.033263 containerd[1536]: time="2025-10-30T23:57:50.033235155Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:50.035369 containerd[1536]: time="2025-10-30T23:57:50.035344340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:50.035979 containerd[1536]: time="2025-10-30T23:57:50.035948595Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.238427139s"
Oct 30 23:57:50.036033 containerd[1536]: time="2025-10-30T23:57:50.035979274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Oct 30 23:57:50.036526 containerd[1536]: time="2025-10-30T23:57:50.036507271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 30 23:57:50.627766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765561016.mount: Deactivated successfully.
Oct 30 23:57:51.561414 containerd[1536]: time="2025-10-30T23:57:51.560772939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:51.561771 containerd[1536]: time="2025-10-30T23:57:51.561654048Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Oct 30 23:57:51.563404 containerd[1536]: time="2025-10-30T23:57:51.563340621Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:51.566441 containerd[1536]: time="2025-10-30T23:57:51.566403338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:51.567517 containerd[1536]: time="2025-10-30T23:57:51.567471257Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.530936511s"
Oct 30 23:57:51.567561 containerd[1536]: time="2025-10-30T23:57:51.567514385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Oct 30 23:57:51.568313 containerd[1536]: time="2025-10-30T23:57:51.568129476Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 30 23:57:52.072062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397663433.mount: Deactivated successfully.
Oct 30 23:57:52.085522 containerd[1536]: time="2025-10-30T23:57:52.085470265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 23:57:52.086654 containerd[1536]: time="2025-10-30T23:57:52.086610745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Oct 30 23:57:52.088427 containerd[1536]: time="2025-10-30T23:57:52.088393777Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 23:57:52.091945 containerd[1536]: time="2025-10-30T23:57:52.091898619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 30 23:57:52.092511 containerd[1536]: time="2025-10-30T23:57:52.092469660Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 524.311273ms"
Oct 30 23:57:52.092511 containerd[1536]: time="2025-10-30T23:57:52.092505695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Oct 30 23:57:52.093061 containerd[1536]: time="2025-10-30T23:57:52.093009870Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 30 23:57:52.890737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985874834.mount: Deactivated successfully.
Oct 30 23:57:54.755834 containerd[1536]: time="2025-10-30T23:57:54.755786033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:54.756366 containerd[1536]: time="2025-10-30T23:57:54.756328161Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Oct 30 23:57:54.757503 containerd[1536]: time="2025-10-30T23:57:54.757446482Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:54.761114 containerd[1536]: time="2025-10-30T23:57:54.761068767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 23:57:54.762831 containerd[1536]: time="2025-10-30T23:57:54.762798908Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.669762173s"
Oct 30 23:57:54.762899 containerd[1536]: time="2025-10-30T23:57:54.762837217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Oct 30 23:57:58.072854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 30 23:57:58.075193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 23:57:58.253887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:57:58.265729 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 23:57:58.298224 kubelet[2209]: E1030 23:57:58.298167 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 23:57:58.300765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 23:57:58.301020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 23:57:58.302541 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107M memory peak.
Oct 30 23:58:01.142572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:58:01.142707 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107M memory peak.
Oct 30 23:58:01.144546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 23:58:01.166130 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)...
Oct 30 23:58:01.166144 systemd[1]: Reloading...
Oct 30 23:58:01.241508 zram_generator::config[2270]: No configuration found.
Oct 30 23:58:01.424517 systemd[1]: Reloading finished in 258 ms.
Oct 30 23:58:01.486998 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 30 23:58:01.487079 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 30 23:58:01.489418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:58:01.489461 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95.1M memory peak.
Oct 30 23:58:01.494493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 23:58:01.598998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 23:58:01.602623 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 30 23:58:01.635933 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 23:58:01.635933 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 30 23:58:01.635933 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 30 23:58:01.636267 kubelet[2312]: I1030 23:58:01.635972 2312 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 30 23:58:02.282422 kubelet[2312]: I1030 23:58:02.282300 2312 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Oct 30 23:58:02.282422 kubelet[2312]: I1030 23:58:02.282332 2312 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 30 23:58:02.282631 kubelet[2312]: I1030 23:58:02.282599 2312 server.go:954] "Client rotation is on, will bootstrap in background"
Oct 30 23:58:02.305844 kubelet[2312]: E1030 23:58:02.305809 2312 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Oct 30 23:58:02.306312 kubelet[2312]: I1030 23:58:02.306296 2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 30 23:58:02.313128 kubelet[2312]: I1030 23:58:02.313104 2312 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 30 23:58:02.315718 kubelet[2312]: I1030 23:58:02.315684 2312 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 30 23:58:02.316387 kubelet[2312]: I1030 23:58:02.316331 2312 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 30 23:58:02.316560 kubelet[2312]: I1030 23:58:02.316376 2312 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 30 23:58:02.316658 kubelet[2312]: I1030 23:58:02.316633 2312 topology_manager.go:138] "Creating topology manager with none policy"
Oct 30 23:58:02.316658 kubelet[2312]: I1030 23:58:02.316642 2312 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:58:02.316855 kubelet[2312]: I1030 23:58:02.316839 2312 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:02.319259 kubelet[2312]: I1030 23:58:02.319218 2312 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:58:02.319259 kubelet[2312]: I1030 23:58:02.319245 2312 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:58:02.319319 kubelet[2312]: I1030 23:58:02.319268 2312 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:58:02.319319 kubelet[2312]: I1030 23:58:02.319278 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 23:58:02.322408 kubelet[2312]: I1030 23:58:02.322118 2312 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 23:58:02.322408 kubelet[2312]: W1030 23:58:02.322267 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Oct 30 23:58:02.322408 kubelet[2312]: E1030 23:58:02.322328 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:02.322840 kubelet[2312]: I1030 23:58:02.322793 2312 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:58:02.322934 kubelet[2312]: W1030 23:58:02.322924 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does 
not exist. Recreating. Oct 30 23:58:02.323097 kubelet[2312]: W1030 23:58:02.323062 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Oct 30 23:58:02.323131 kubelet[2312]: E1030 23:58:02.323108 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:02.323825 kubelet[2312]: I1030 23:58:02.323800 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:58:02.323860 kubelet[2312]: I1030 23:58:02.323850 2312 server.go:1287] "Started kubelet" Oct 30 23:58:02.325404 kubelet[2312]: I1030 23:58:02.323915 2312 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:58:02.325404 kubelet[2312]: I1030 23:58:02.324742 2312 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:58:02.327914 kubelet[2312]: I1030 23:58:02.327855 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:58:02.328249 kubelet[2312]: I1030 23:58:02.328230 2312 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:58:02.328537 kubelet[2312]: E1030 23:58:02.327734 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736a3b56cc115f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 23:58:02.323824991 +0000 UTC m=+0.718474887,LastTimestamp:2025-10-30 23:58:02.323824991 +0000 UTC m=+0.718474887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 23:58:02.328725 kubelet[2312]: I1030 23:58:02.328699 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:58:02.328819 kubelet[2312]: I1030 23:58:02.328683 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:58:02.329476 kubelet[2312]: I1030 23:58:02.329456 2312 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:58:02.329619 kubelet[2312]: I1030 23:58:02.329605 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:58:02.329723 kubelet[2312]: I1030 23:58:02.329712 2312 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:58:02.330170 kubelet[2312]: E1030 23:58:02.330113 2312 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:58:02.330170 kubelet[2312]: W1030 23:58:02.330037 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Oct 30 23:58:02.330170 kubelet[2312]: E1030 23:58:02.330158 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:02.330451 kubelet[2312]: I1030 23:58:02.330404 2312 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:58:02.330504 kubelet[2312]: I1030 23:58:02.330485 2312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:58:02.330983 kubelet[2312]: E1030 23:58:02.330882 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:02.331046 kubelet[2312]: E1030 23:58:02.330973 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Oct 30 23:58:02.331350 kubelet[2312]: I1030 23:58:02.331330 2312 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:58:02.342366 kubelet[2312]: I1030 23:58:02.342344 2312 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:58:02.342366 kubelet[2312]: I1030 23:58:02.342362 2312 
cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:58:02.342476 kubelet[2312]: I1030 23:58:02.342392 2312 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:02.343823 kubelet[2312]: I1030 23:58:02.343774 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:58:02.345010 kubelet[2312]: I1030 23:58:02.344980 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:58:02.345010 kubelet[2312]: I1030 23:58:02.345009 2312 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:58:02.345098 kubelet[2312]: I1030 23:58:02.345027 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 23:58:02.345098 kubelet[2312]: I1030 23:58:02.345034 2312 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:58:02.345098 kubelet[2312]: E1030 23:58:02.345068 2312 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:58:02.345806 kubelet[2312]: W1030 23:58:02.345776 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Oct 30 23:58:02.345867 kubelet[2312]: E1030 23:58:02.345822 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:02.415660 kubelet[2312]: I1030 23:58:02.415617 2312 policy_none.go:49] "None policy: Start" Oct 30 23:58:02.415660 kubelet[2312]: I1030 23:58:02.415649 2312 
memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:58:02.415660 kubelet[2312]: I1030 23:58:02.415663 2312 state_mem.go:35] "Initializing new in-memory state store" Oct 30 23:58:02.421030 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 23:58:02.431653 kubelet[2312]: E1030 23:58:02.431610 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:02.439319 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 23:58:02.442365 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 30 23:58:02.446193 kubelet[2312]: E1030 23:58:02.446151 2312 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 23:58:02.458257 kubelet[2312]: I1030 23:58:02.458218 2312 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:58:02.458490 kubelet[2312]: I1030 23:58:02.458459 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:58:02.458931 kubelet[2312]: I1030 23:58:02.458477 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:58:02.458931 kubelet[2312]: I1030 23:58:02.458835 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:58:02.460328 kubelet[2312]: E1030 23:58:02.460296 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 23:58:02.460412 kubelet[2312]: E1030 23:58:02.460350 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 23:58:02.532029 kubelet[2312]: E1030 23:58:02.531988 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Oct 30 23:58:02.560107 kubelet[2312]: I1030 23:58:02.560023 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 23:58:02.560534 kubelet[2312]: E1030 23:58:02.560505 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 30 23:58:02.563093 kubelet[2312]: E1030 23:58:02.563005 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736a3b56cc115f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 23:58:02.323824991 +0000 UTC m=+0.718474887,LastTimestamp:2025-10-30 23:58:02.323824991 +0000 UTC m=+0.718474887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 23:58:02.654962 systemd[1]: Created slice kubepods-burstable-podd963db4a0d8a7b3d739fd524e4ba12d9.slice - libcontainer container 
kubepods-burstable-podd963db4a0d8a7b3d739fd524e4ba12d9.slice. Oct 30 23:58:02.671101 kubelet[2312]: E1030 23:58:02.671081 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:02.674191 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 30 23:58:02.689652 kubelet[2312]: E1030 23:58:02.689627 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:02.691924 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Oct 30 23:58:02.693346 kubelet[2312]: E1030 23:58:02.693328 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:02.731975 kubelet[2312]: I1030 23:58:02.731922 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:02.732135 kubelet[2312]: I1030 23:58:02.731961 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:02.732135 kubelet[2312]: I1030 23:58:02.732096 2312 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:02.732135 kubelet[2312]: I1030 23:58:02.732114 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:02.732311 kubelet[2312]: I1030 23:58:02.732242 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:02.732311 kubelet[2312]: I1030 23:58:02.732279 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:02.732311 kubelet[2312]: I1030 23:58:02.732296 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:02.732440 kubelet[2312]: I1030 23:58:02.732427 2312 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:02.732518 kubelet[2312]: I1030 23:58:02.732507 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:02.761853 kubelet[2312]: I1030 23:58:02.761828 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 23:58:02.762197 kubelet[2312]: E1030 23:58:02.762171 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 30 23:58:02.932673 kubelet[2312]: E1030 23:58:02.932560 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Oct 30 23:58:02.972590 containerd[1536]: time="2025-10-30T23:58:02.972500201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d963db4a0d8a7b3d739fd524e4ba12d9,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:02.991088 containerd[1536]: time="2025-10-30T23:58:02.991039194Z" level=info msg="connecting to shim f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539" address="unix:///run/containerd/s/1ae84b45597ed19aa7ee7049a1a80fa152117e211f52867bc4be2be86c788b8b" namespace=k8s.io 
protocol=ttrpc version=3 Oct 30 23:58:02.991198 containerd[1536]: time="2025-10-30T23:58:02.991104851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:02.995040 containerd[1536]: time="2025-10-30T23:58:02.995002579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:03.015925 containerd[1536]: time="2025-10-30T23:58:03.015781847Z" level=info msg="connecting to shim fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab" address="unix:///run/containerd/s/03d27b3f7448cb1a547bd5571cb18b37381015e66bacc48dd6cf4048950cc373" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:03.021074 containerd[1536]: time="2025-10-30T23:58:03.021035956Z" level=info msg="connecting to shim 2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa" address="unix:///run/containerd/s/571333b2896e558c7943c71166a7304b3b0444b82aa21a758f1b470b3edb7151" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:03.022577 systemd[1]: Started cri-containerd-f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539.scope - libcontainer container f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539. Oct 30 23:58:03.047570 systemd[1]: Started cri-containerd-2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa.scope - libcontainer container 2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa. Oct 30 23:58:03.050341 systemd[1]: Started cri-containerd-fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab.scope - libcontainer container fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab. 
Oct 30 23:58:03.064047 containerd[1536]: time="2025-10-30T23:58:03.064002397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d963db4a0d8a7b3d739fd524e4ba12d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539\"" Oct 30 23:58:03.067115 containerd[1536]: time="2025-10-30T23:58:03.067070011Z" level=info msg="CreateContainer within sandbox \"f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 23:58:03.076419 containerd[1536]: time="2025-10-30T23:58:03.076127660Z" level=info msg="Container 781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:03.085720 containerd[1536]: time="2025-10-30T23:58:03.085682862Z" level=info msg="CreateContainer within sandbox \"f9e4faf03cd34301bbc31116ca7046ebbd118681268b4fa2313a0d43a6c9c539\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79\"" Oct 30 23:58:03.086629 containerd[1536]: time="2025-10-30T23:58:03.086600630Z" level=info msg="StartContainer for \"781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79\"" Oct 30 23:58:03.088401 containerd[1536]: time="2025-10-30T23:58:03.088065041Z" level=info msg="connecting to shim 781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79" address="unix:///run/containerd/s/1ae84b45597ed19aa7ee7049a1a80fa152117e211f52867bc4be2be86c788b8b" protocol=ttrpc version=3 Oct 30 23:58:03.091185 containerd[1536]: time="2025-10-30T23:58:03.091154700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa\"" Oct 30 23:58:03.092320 containerd[1536]: 
time="2025-10-30T23:58:03.092293798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab\"" Oct 30 23:58:03.093504 containerd[1536]: time="2025-10-30T23:58:03.093479907Z" level=info msg="CreateContainer within sandbox \"2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 23:58:03.096040 containerd[1536]: time="2025-10-30T23:58:03.095994756Z" level=info msg="CreateContainer within sandbox \"fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 23:58:03.101124 containerd[1536]: time="2025-10-30T23:58:03.101066623Z" level=info msg="Container b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:03.106655 containerd[1536]: time="2025-10-30T23:58:03.106628121Z" level=info msg="Container a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:03.111807 containerd[1536]: time="2025-10-30T23:58:03.111765924Z" level=info msg="CreateContainer within sandbox \"2f47c0e09c514e4aefeda5cd77a6cf2d09ce5f0350a4e75f307633c8c3f871aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec\"" Oct 30 23:58:03.112163 containerd[1536]: time="2025-10-30T23:58:03.112138528Z" level=info msg="StartContainer for \"b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec\"" Oct 30 23:58:03.113054 containerd[1536]: time="2025-10-30T23:58:03.112994322Z" level=info msg="CreateContainer within sandbox \"fd43bc98adc8aeee2f8cc0fbee6c494a90127fc0c12a8d330a542157e5db83ab\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143\"" Oct 30 23:58:03.113281 containerd[1536]: time="2025-10-30T23:58:03.113204409Z" level=info msg="connecting to shim b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec" address="unix:///run/containerd/s/571333b2896e558c7943c71166a7304b3b0444b82aa21a758f1b470b3edb7151" protocol=ttrpc version=3 Oct 30 23:58:03.113391 containerd[1536]: time="2025-10-30T23:58:03.113360685Z" level=info msg="StartContainer for \"a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143\"" Oct 30 23:58:03.114541 systemd[1]: Started cri-containerd-781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79.scope - libcontainer container 781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79. Oct 30 23:58:03.114787 containerd[1536]: time="2025-10-30T23:58:03.114597525Z" level=info msg="connecting to shim a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143" address="unix:///run/containerd/s/03d27b3f7448cb1a547bd5571cb18b37381015e66bacc48dd6cf4048950cc373" protocol=ttrpc version=3 Oct 30 23:58:03.134564 systemd[1]: Started cri-containerd-a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143.scope - libcontainer container a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143. Oct 30 23:58:03.138518 systemd[1]: Started cri-containerd-b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec.scope - libcontainer container b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec. 
Oct 30 23:58:03.164435 kubelet[2312]: I1030 23:58:03.164407 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 23:58:03.164724 kubelet[2312]: E1030 23:58:03.164696 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 30 23:58:03.172402 containerd[1536]: time="2025-10-30T23:58:03.172298740Z" level=info msg="StartContainer for \"781d75eb5a63cb6834b1bac73b31564397e27efa7b871cf3fb1b994c24029e79\" returns successfully" Oct 30 23:58:03.175578 kubelet[2312]: W1030 23:58:03.175472 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Oct 30 23:58:03.175651 kubelet[2312]: E1030 23:58:03.175590 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:03.182378 containerd[1536]: time="2025-10-30T23:58:03.182292321Z" level=info msg="StartContainer for \"a930bbab2eae72974b3bd091666e5aefdaa67c54a25425c492b307f917ffc143\" returns successfully" Oct 30 23:58:03.187911 containerd[1536]: time="2025-10-30T23:58:03.187533827Z" level=info msg="StartContainer for \"b6334c718b020942ddc31db52f14a3e17f8285387c4319f2b5c1df88e106e4ec\" returns successfully" Oct 30 23:58:03.194749 kubelet[2312]: W1030 23:58:03.194698 2312 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.93:6443: connect: connection refused Oct 30 23:58:03.194825 kubelet[2312]: E1030 23:58:03.194761 2312 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Oct 30 23:58:03.353741 kubelet[2312]: E1030 23:58:03.353709 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:03.357674 kubelet[2312]: E1030 23:58:03.357654 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:03.361753 kubelet[2312]: E1030 23:58:03.361732 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:03.966592 kubelet[2312]: I1030 23:58:03.966541 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 23:58:04.363005 kubelet[2312]: E1030 23:58:04.362897 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:04.363281 kubelet[2312]: E1030 23:58:04.363247 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:04.380349 kubelet[2312]: E1030 23:58:04.380322 2312 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 23:58:04.510972 kubelet[2312]: E1030 23:58:04.510932 2312 nodelease.go:49] "Failed to get node when trying to set owner 
ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 23:58:04.587891 kubelet[2312]: I1030 23:58:04.587855 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 23:58:04.587891 kubelet[2312]: E1030 23:58:04.587889 2312 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 23:58:04.599266 kubelet[2312]: E1030 23:58:04.599231 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:04.700451 kubelet[2312]: E1030 23:58:04.700339 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:04.801265 kubelet[2312]: E1030 23:58:04.801220 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:04.902174 kubelet[2312]: E1030 23:58:04.902132 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:05.002825 kubelet[2312]: E1030 23:58:05.002726 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:05.103495 kubelet[2312]: E1030 23:58:05.103434 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:05.232046 kubelet[2312]: I1030 23:58:05.231720 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:05.237564 kubelet[2312]: E1030 23:58:05.237528 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:05.237658 kubelet[2312]: I1030 23:58:05.237647 2312 kubelet.go:3194] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:05.239411 kubelet[2312]: E1030 23:58:05.239300 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:05.239411 kubelet[2312]: I1030 23:58:05.239322 2312 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:05.241003 kubelet[2312]: E1030 23:58:05.240961 2312 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:05.323300 kubelet[2312]: I1030 23:58:05.323205 2312 apiserver.go:52] "Watching apiserver" Oct 30 23:58:05.330683 kubelet[2312]: I1030 23:58:05.330659 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 23:58:06.542269 systemd[1]: Reload requested from client PID 2589 ('systemctl') (unit session-7.scope)... Oct 30 23:58:06.542284 systemd[1]: Reloading... Oct 30 23:58:06.611519 zram_generator::config[2629]: No configuration found. Oct 30 23:58:06.781352 systemd[1]: Reloading finished in 238 ms. Oct 30 23:58:06.805148 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:58:06.816852 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 23:58:06.817103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 23:58:06.817154 systemd[1]: kubelet.service: Consumed 1.057s CPU time, 129.5M memory peak. Oct 30 23:58:06.819604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 23:58:06.946278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 23:58:06.959799 (kubelet)[2674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 23:58:07.006425 kubelet[2674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:58:07.006425 kubelet[2674]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 23:58:07.006425 kubelet[2674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 23:58:07.006425 kubelet[2674]: I1030 23:58:07.005960 2674 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 23:58:07.011410 kubelet[2674]: I1030 23:58:07.011362 2674 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 30 23:58:07.011523 kubelet[2674]: I1030 23:58:07.011513 2674 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 23:58:07.011857 kubelet[2674]: I1030 23:58:07.011837 2674 server.go:954] "Client rotation is on, will bootstrap in background" Oct 30 23:58:07.013178 kubelet[2674]: I1030 23:58:07.013156 2674 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 30 23:58:07.015583 kubelet[2674]: I1030 23:58:07.015547 2674 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 23:58:07.021250 kubelet[2674]: I1030 23:58:07.021225 2674 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 23:58:07.023874 kubelet[2674]: I1030 23:58:07.023851 2674 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 30 23:58:07.024090 kubelet[2674]: I1030 23:58:07.024063 2674 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 23:58:07.024246 kubelet[2674]: I1030 23:58:07.024090 2674 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 23:58:07.024317 kubelet[2674]: I1030 23:58:07.024257 2674 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 23:58:07.024317 kubelet[2674]: I1030 23:58:07.024266 2674 container_manager_linux.go:304] "Creating device plugin manager" Oct 30 23:58:07.024317 kubelet[2674]: I1030 23:58:07.024306 2674 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:07.024461 kubelet[2674]: I1030 23:58:07.024450 2674 kubelet.go:446] "Attempting to sync node with API server" Oct 30 23:58:07.024488 kubelet[2674]: I1030 23:58:07.024465 2674 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 23:58:07.024488 kubelet[2674]: I1030 23:58:07.024485 2674 kubelet.go:352] "Adding apiserver pod source" Oct 30 23:58:07.024535 kubelet[2674]: I1030 23:58:07.024494 2674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 23:58:07.025417 kubelet[2674]: I1030 23:58:07.025275 2674 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 23:58:07.025765 kubelet[2674]: I1030 23:58:07.025746 2674 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 30 23:58:07.026154 kubelet[2674]: I1030 23:58:07.026133 2674 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 23:58:07.026189 kubelet[2674]: I1030 23:58:07.026166 2674 server.go:1287] "Started kubelet" Oct 30 23:58:07.027871 kubelet[2674]: I1030 23:58:07.026238 2674 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 23:58:07.027871 kubelet[2674]: I1030 23:58:07.026481 
2674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 23:58:07.027871 kubelet[2674]: I1030 23:58:07.026720 2674 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 23:58:07.028047 kubelet[2674]: I1030 23:58:07.027686 2674 server.go:479] "Adding debug handlers to kubelet server" Oct 30 23:58:07.029597 kubelet[2674]: I1030 23:58:07.029571 2674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 23:58:07.031720 kubelet[2674]: I1030 23:58:07.031688 2674 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 23:58:07.033306 kubelet[2674]: E1030 23:58:07.033277 2674 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 23:58:07.034711 kubelet[2674]: I1030 23:58:07.034683 2674 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 23:58:07.034914 kubelet[2674]: E1030 23:58:07.034893 2674 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 23:58:07.039673 kubelet[2674]: I1030 23:58:07.039643 2674 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 23:58:07.041124 kubelet[2674]: I1030 23:58:07.039829 2674 reconciler.go:26] "Reconciler: start to sync state" Oct 30 23:58:07.042222 kubelet[2674]: I1030 23:58:07.042196 2674 factory.go:221] Registration of the containerd container factory successfully Oct 30 23:58:07.042222 kubelet[2674]: I1030 23:58:07.042218 2674 factory.go:221] Registration of the systemd container factory successfully Oct 30 23:58:07.042377 kubelet[2674]: I1030 23:58:07.042349 2674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial 
unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 23:58:07.048512 kubelet[2674]: I1030 23:58:07.048464 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 30 23:58:07.052399 kubelet[2674]: I1030 23:58:07.052308 2674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 30 23:58:07.052399 kubelet[2674]: I1030 23:58:07.052332 2674 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 30 23:58:07.052399 kubelet[2674]: I1030 23:58:07.052349 2674 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 23:58:07.052399 kubelet[2674]: I1030 23:58:07.052356 2674 kubelet.go:2382] "Starting kubelet main sync loop" Oct 30 23:58:07.052736 kubelet[2674]: E1030 23:58:07.052701 2674 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 23:58:07.089350 kubelet[2674]: I1030 23:58:07.089260 2674 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 23:58:07.089350 kubelet[2674]: I1030 23:58:07.089276 2674 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 23:58:07.089350 kubelet[2674]: I1030 23:58:07.089296 2674 state_mem.go:36] "Initialized new in-memory state store" Oct 30 23:58:07.089515 kubelet[2674]: I1030 23:58:07.089483 2674 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 23:58:07.089515 kubelet[2674]: I1030 23:58:07.089494 2674 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 23:58:07.089515 kubelet[2674]: I1030 23:58:07.089512 2674 policy_none.go:49] "None policy: Start" Oct 30 23:58:07.089573 kubelet[2674]: I1030 23:58:07.089520 2674 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 23:58:07.089573 kubelet[2674]: I1030 23:58:07.089529 2674 state_mem.go:35] "Initializing new in-memory state store" Oct 30 
23:58:07.090188 kubelet[2674]: I1030 23:58:07.089616 2674 state_mem.go:75] "Updated machine memory state" Oct 30 23:58:07.093697 kubelet[2674]: I1030 23:58:07.093531 2674 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 30 23:58:07.093852 kubelet[2674]: I1030 23:58:07.093838 2674 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 23:58:07.093932 kubelet[2674]: I1030 23:58:07.093903 2674 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 23:58:07.094083 kubelet[2674]: I1030 23:58:07.094072 2674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 23:58:07.096578 kubelet[2674]: E1030 23:58:07.096162 2674 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 23:58:07.154110 kubelet[2674]: I1030 23:58:07.154074 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:07.154944 kubelet[2674]: I1030 23:58:07.154740 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:07.155207 kubelet[2674]: I1030 23:58:07.154792 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:07.196761 kubelet[2674]: I1030 23:58:07.196719 2674 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 23:58:07.203928 kubelet[2674]: I1030 23:58:07.203782 2674 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 30 23:58:07.204139 kubelet[2674]: I1030 23:58:07.204109 2674 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 23:58:07.241049 kubelet[2674]: I1030 23:58:07.241014 2674 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:07.241179 kubelet[2674]: I1030 23:58:07.241069 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:07.241179 kubelet[2674]: I1030 23:58:07.241095 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:07.241179 kubelet[2674]: I1030 23:58:07.241158 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:07.241242 kubelet[2674]: I1030 23:58:07.241180 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:07.241242 kubelet[2674]: I1030 23:58:07.241228 
2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:07.241322 kubelet[2674]: I1030 23:58:07.241243 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d963db4a0d8a7b3d739fd524e4ba12d9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d963db4a0d8a7b3d739fd524e4ba12d9\") " pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:07.241322 kubelet[2674]: I1030 23:58:07.241257 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:07.241322 kubelet[2674]: I1030 23:58:07.241271 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 23:58:08.025723 kubelet[2674]: I1030 23:58:08.025433 2674 apiserver.go:52] "Watching apiserver" Oct 30 23:58:08.039997 kubelet[2674]: I1030 23:58:08.039941 2674 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 23:58:08.069921 kubelet[2674]: I1030 23:58:08.068946 2674 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:08.069921 kubelet[2674]: I1030 23:58:08.068948 2674 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:08.076761 kubelet[2674]: E1030 23:58:08.076486 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 23:58:08.076761 kubelet[2674]: E1030 23:58:08.076496 2674 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 23:58:08.103485 kubelet[2674]: I1030 23:58:08.102558 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.102537287 podStartE2EDuration="1.102537287s" podCreationTimestamp="2025-10-30 23:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:08.091823951 +0000 UTC m=+1.128372647" watchObservedRunningTime="2025-10-30 23:58:08.102537287 +0000 UTC m=+1.139086183" Oct 30 23:58:08.112154 kubelet[2674]: I1030 23:58:08.112003 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.111978764 podStartE2EDuration="1.111978764s" podCreationTimestamp="2025-10-30 23:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:08.10367398 +0000 UTC m=+1.140222756" watchObservedRunningTime="2025-10-30 23:58:08.111978764 +0000 UTC m=+1.148527500" Oct 30 23:58:08.112553 kubelet[2674]: I1030 23:58:08.112363 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.112356702 podStartE2EDuration="1.112356702s" podCreationTimestamp="2025-10-30 23:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:08.112214655 +0000 UTC m=+1.148763391" watchObservedRunningTime="2025-10-30 23:58:08.112356702 +0000 UTC m=+1.148905438" Oct 30 23:58:11.818752 kubelet[2674]: I1030 23:58:11.818722 2674 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 23:58:11.819511 containerd[1536]: time="2025-10-30T23:58:11.819462136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 23:58:11.819757 kubelet[2674]: I1030 23:58:11.819651 2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 23:58:12.779932 systemd[1]: Created slice kubepods-besteffort-pod1a343bbc_4950_449b_bdd7_dffe1b4c4243.slice - libcontainer container kubepods-besteffort-pod1a343bbc_4950_449b_bdd7_dffe1b4c4243.slice. Oct 30 23:58:12.875415 kubelet[2674]: I1030 23:58:12.874545 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf7gt\" (UniqueName: \"kubernetes.io/projected/1a343bbc-4950-449b-bdd7-dffe1b4c4243-kube-api-access-zf7gt\") pod \"kube-proxy-rgbbl\" (UID: \"1a343bbc-4950-449b-bdd7-dffe1b4c4243\") " pod="kube-system/kube-proxy-rgbbl" Oct 30 23:58:12.875415 kubelet[2674]: I1030 23:58:12.874717 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a343bbc-4950-449b-bdd7-dffe1b4c4243-kube-proxy\") pod \"kube-proxy-rgbbl\" (UID: \"1a343bbc-4950-449b-bdd7-dffe1b4c4243\") " pod="kube-system/kube-proxy-rgbbl" Oct 30 23:58:12.875415 kubelet[2674]: I1030 23:58:12.874801 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a343bbc-4950-449b-bdd7-dffe1b4c4243-xtables-lock\") pod \"kube-proxy-rgbbl\" (UID: 
\"1a343bbc-4950-449b-bdd7-dffe1b4c4243\") " pod="kube-system/kube-proxy-rgbbl" Oct 30 23:58:12.875415 kubelet[2674]: I1030 23:58:12.874827 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a343bbc-4950-449b-bdd7-dffe1b4c4243-lib-modules\") pod \"kube-proxy-rgbbl\" (UID: \"1a343bbc-4950-449b-bdd7-dffe1b4c4243\") " pod="kube-system/kube-proxy-rgbbl" Oct 30 23:58:12.905530 systemd[1]: Created slice kubepods-besteffort-poddc4bab31_e35e_49f3_b495_e04f83a8b136.slice - libcontainer container kubepods-besteffort-poddc4bab31_e35e_49f3_b495_e04f83a8b136.slice. Oct 30 23:58:12.975444 kubelet[2674]: I1030 23:58:12.975397 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8657b\" (UniqueName: \"kubernetes.io/projected/dc4bab31-e35e-49f3-b495-e04f83a8b136-kube-api-access-8657b\") pod \"tigera-operator-7dcd859c48-lm2fz\" (UID: \"dc4bab31-e35e-49f3-b495-e04f83a8b136\") " pod="tigera-operator/tigera-operator-7dcd859c48-lm2fz" Oct 30 23:58:12.975580 kubelet[2674]: I1030 23:58:12.975467 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc4bab31-e35e-49f3-b495-e04f83a8b136-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lm2fz\" (UID: \"dc4bab31-e35e-49f3-b495-e04f83a8b136\") " pod="tigera-operator/tigera-operator-7dcd859c48-lm2fz" Oct 30 23:58:13.097738 containerd[1536]: time="2025-10-30T23:58:13.097619380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgbbl,Uid:1a343bbc-4950-449b-bdd7-dffe1b4c4243,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:13.119013 containerd[1536]: time="2025-10-30T23:58:13.118964326Z" level=info msg="connecting to shim a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c" 
address="unix:///run/containerd/s/8344c4a717f58a4fad0c870f4b45acad116cc8134b313b155c792e0a1db11c5a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:13.142596 systemd[1]: Started cri-containerd-a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c.scope - libcontainer container a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c. Oct 30 23:58:13.171327 containerd[1536]: time="2025-10-30T23:58:13.171265954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgbbl,Uid:1a343bbc-4950-449b-bdd7-dffe1b4c4243,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c\"" Oct 30 23:58:13.175067 containerd[1536]: time="2025-10-30T23:58:13.174992044Z" level=info msg="CreateContainer within sandbox \"a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 23:58:13.212931 containerd[1536]: time="2025-10-30T23:58:13.212887888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lm2fz,Uid:dc4bab31-e35e-49f3-b495-e04f83a8b136,Namespace:tigera-operator,Attempt:0,}" Oct 30 23:58:13.228063 containerd[1536]: time="2025-10-30T23:58:13.227087225Z" level=info msg="Container 0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:13.237736 containerd[1536]: time="2025-10-30T23:58:13.237643074Z" level=info msg="CreateContainer within sandbox \"a8b4e3b3310022cd2ade18249f487e0dffab653a235edcd29a4f1c17c543e71c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b\"" Oct 30 23:58:13.238538 containerd[1536]: time="2025-10-30T23:58:13.238319257Z" level=info msg="StartContainer for \"0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b\"" Oct 30 23:58:13.240533 containerd[1536]: 
time="2025-10-30T23:58:13.240414010Z" level=info msg="connecting to shim 0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b" address="unix:///run/containerd/s/8344c4a717f58a4fad0c870f4b45acad116cc8134b313b155c792e0a1db11c5a" protocol=ttrpc version=3 Oct 30 23:58:13.243950 containerd[1536]: time="2025-10-30T23:58:13.243898092Z" level=info msg="connecting to shim b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c" address="unix:///run/containerd/s/14daad5445bc401467db05f83fb552821a43b83d1418fc50ce4a34c65e89b311" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:13.262655 systemd[1]: Started cri-containerd-0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b.scope - libcontainer container 0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b. Oct 30 23:58:13.277617 systemd[1]: Started cri-containerd-b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c.scope - libcontainer container b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c. 
Oct 30 23:58:13.300943 containerd[1536]: time="2025-10-30T23:58:13.300900684Z" level=info msg="StartContainer for \"0e3b16e4296cf909beee74c63d00f9bed6c4298f7ec2a5a903a9824d19e3938b\" returns successfully" Oct 30 23:58:13.323474 containerd[1536]: time="2025-10-30T23:58:13.323431152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lm2fz,Uid:dc4bab31-e35e-49f3-b495-e04f83a8b136,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c\"" Oct 30 23:58:13.326467 containerd[1536]: time="2025-10-30T23:58:13.325987401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 23:58:14.096852 kubelet[2674]: I1030 23:58:14.096759 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rgbbl" podStartSLOduration=2.0967410810000002 podStartE2EDuration="2.096741081s" podCreationTimestamp="2025-10-30 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:14.094144635 +0000 UTC m=+7.130693371" watchObservedRunningTime="2025-10-30 23:58:14.096741081 +0000 UTC m=+7.133289817" Oct 30 23:58:14.791043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620486638.mount: Deactivated successfully. 
Oct 30 23:58:15.425481 containerd[1536]: time="2025-10-30T23:58:15.425430222Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:15.426509 containerd[1536]: time="2025-10-30T23:58:15.426479455Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 30 23:58:15.427722 containerd[1536]: time="2025-10-30T23:58:15.427678572Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:15.429595 containerd[1536]: time="2025-10-30T23:58:15.429562551Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:15.430205 containerd[1536]: time="2025-10-30T23:58:15.430170730Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.104129847s" Oct 30 23:58:15.430205 containerd[1536]: time="2025-10-30T23:58:15.430201971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 30 23:58:15.432352 containerd[1536]: time="2025-10-30T23:58:15.432316438Z" level=info msg="CreateContainer within sandbox \"b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 23:58:15.437114 containerd[1536]: time="2025-10-30T23:58:15.437073787Z" level=info msg="Container 
ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:15.442853 containerd[1536]: time="2025-10-30T23:58:15.442797086Z" level=info msg="CreateContainer within sandbox \"b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\"" Oct 30 23:58:15.444355 containerd[1536]: time="2025-10-30T23:58:15.443200459Z" level=info msg="StartContainer for \"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\"" Oct 30 23:58:15.445091 containerd[1536]: time="2025-10-30T23:58:15.445034196Z" level=info msg="connecting to shim ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10" address="unix:///run/containerd/s/14daad5445bc401467db05f83fb552821a43b83d1418fc50ce4a34c65e89b311" protocol=ttrpc version=3 Oct 30 23:58:15.464550 systemd[1]: Started cri-containerd-ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10.scope - libcontainer container ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10. 
Oct 30 23:58:15.487950 containerd[1536]: time="2025-10-30T23:58:15.487852458Z" level=info msg="StartContainer for \"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\" returns successfully" Oct 30 23:58:16.096257 kubelet[2674]: I1030 23:58:16.096195 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lm2fz" podStartSLOduration=1.9908207610000002 podStartE2EDuration="4.096178889s" podCreationTimestamp="2025-10-30 23:58:12 +0000 UTC" firstStartedPulling="2025-10-30 23:58:13.325566226 +0000 UTC m=+6.362114962" lastFinishedPulling="2025-10-30 23:58:15.430924394 +0000 UTC m=+8.467473090" observedRunningTime="2025-10-30 23:58:16.096086326 +0000 UTC m=+9.132635062" watchObservedRunningTime="2025-10-30 23:58:16.096178889 +0000 UTC m=+9.132727625" Oct 30 23:58:17.534891 systemd[1]: cri-containerd-ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10.scope: Deactivated successfully. Oct 30 23:58:17.564134 containerd[1536]: time="2025-10-30T23:58:17.564087546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\" id:\"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\" pid:2995 exit_status:1 exited_at:{seconds:1761868697 nanos:563657334}" Oct 30 23:58:17.570376 containerd[1536]: time="2025-10-30T23:58:17.570315682Z" level=info msg="received exit event container_id:\"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\" id:\"ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10\" pid:2995 exit_status:1 exited_at:{seconds:1761868697 nanos:563657334}" Oct 30 23:58:17.628927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10-rootfs.mount: Deactivated successfully. 
Oct 30 23:58:18.097378 kubelet[2674]: I1030 23:58:18.097339 2674 scope.go:117] "RemoveContainer" containerID="ca03dd30f074b87caf3e828e04acf0dfc8e2b8faacd91f5f4e438157cf673e10" Oct 30 23:58:18.108004 containerd[1536]: time="2025-10-30T23:58:18.107944872Z" level=info msg="CreateContainer within sandbox \"b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 30 23:58:18.115568 containerd[1536]: time="2025-10-30T23:58:18.115529115Z" level=info msg="Container 8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:18.121975 containerd[1536]: time="2025-10-30T23:58:18.121921526Z" level=info msg="CreateContainer within sandbox \"b5c7520ad8394976f2916438e49a861a4c4618b0862f7f7981a1e8438aa7a56c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb\"" Oct 30 23:58:18.123075 containerd[1536]: time="2025-10-30T23:58:18.123040756Z" level=info msg="StartContainer for \"8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb\"" Oct 30 23:58:18.124478 containerd[1536]: time="2025-10-30T23:58:18.124444713Z" level=info msg="connecting to shim 8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb" address="unix:///run/containerd/s/14daad5445bc401467db05f83fb552821a43b83d1418fc50ce4a34c65e89b311" protocol=ttrpc version=3 Oct 30 23:58:18.147566 systemd[1]: Started cri-containerd-8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb.scope - libcontainer container 8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb. 
Oct 30 23:58:18.174872 containerd[1536]: time="2025-10-30T23:58:18.174836580Z" level=info msg="StartContainer for \"8e2242494a1eee78db26a166c0be56457e0091f65c4b80bf85b9ffe58fe7f7cb\" returns successfully" Oct 30 23:58:20.486818 update_engine[1514]: I20251030 23:58:20.486739 1514 update_attempter.cc:509] Updating boot flags... Oct 30 23:58:20.949007 sudo[1737]: pam_unix(sudo:session): session closed for user root Oct 30 23:58:20.951434 sshd[1736]: Connection closed by 10.0.0.1 port 44800 Oct 30 23:58:20.952686 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:20.958971 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:44800.service: Deactivated successfully. Oct 30 23:58:20.961003 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 23:58:20.961333 systemd[1]: session-7.scope: Consumed 8.190s CPU time, 217.5M memory peak. Oct 30 23:58:20.963849 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Oct 30 23:58:20.964982 systemd-logind[1507]: Removed session 7. Oct 30 23:58:29.464530 systemd[1]: Created slice kubepods-besteffort-pod55a85671_ca16_49a4_b3d5_9545d5d0a30a.slice - libcontainer container kubepods-besteffort-pod55a85671_ca16_49a4_b3d5_9545d5d0a30a.slice. 
Oct 30 23:58:29.480471 kubelet[2674]: I1030 23:58:29.478642 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a85671-ca16-49a4-b3d5-9545d5d0a30a-tigera-ca-bundle\") pod \"calico-typha-8578b9fd8f-whm5d\" (UID: \"55a85671-ca16-49a4-b3d5-9545d5d0a30a\") " pod="calico-system/calico-typha-8578b9fd8f-whm5d" Oct 30 23:58:29.480471 kubelet[2674]: I1030 23:58:29.478691 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc77q\" (UniqueName: \"kubernetes.io/projected/55a85671-ca16-49a4-b3d5-9545d5d0a30a-kube-api-access-vc77q\") pod \"calico-typha-8578b9fd8f-whm5d\" (UID: \"55a85671-ca16-49a4-b3d5-9545d5d0a30a\") " pod="calico-system/calico-typha-8578b9fd8f-whm5d" Oct 30 23:58:29.480471 kubelet[2674]: I1030 23:58:29.478709 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55a85671-ca16-49a4-b3d5-9545d5d0a30a-typha-certs\") pod \"calico-typha-8578b9fd8f-whm5d\" (UID: \"55a85671-ca16-49a4-b3d5-9545d5d0a30a\") " pod="calico-system/calico-typha-8578b9fd8f-whm5d" Oct 30 23:58:29.643655 systemd[1]: Created slice kubepods-besteffort-podbd06df20_2450_45b9_be8e_0d524f3dee94.slice - libcontainer container kubepods-besteffort-podbd06df20_2450_45b9_be8e_0d524f3dee94.slice. 
Oct 30 23:58:29.680589 kubelet[2674]: I1030 23:58:29.680524 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-cni-net-dir\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.680758 kubelet[2674]: I1030 23:58:29.680744 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-policysync\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.680869 kubelet[2674]: I1030 23:58:29.680854 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-cni-log-dir\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.680942 kubelet[2674]: I1030 23:58:29.680931 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-lib-modules\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681012 kubelet[2674]: I1030 23:58:29.680999 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd06df20-2450-45b9-be8e-0d524f3dee94-tigera-ca-bundle\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681078 kubelet[2674]: I1030 23:58:29.681066 2674 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-cni-bin-dir\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681144 kubelet[2674]: I1030 23:58:29.681128 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-flexvol-driver-host\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681295 kubelet[2674]: I1030 23:58:29.681198 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-var-lib-calico\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681295 kubelet[2674]: I1030 23:58:29.681219 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-var-run-calico\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681295 kubelet[2674]: I1030 23:58:29.681236 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd06df20-2450-45b9-be8e-0d524f3dee94-node-certs\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681295 kubelet[2674]: I1030 23:58:29.681253 2674 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd06df20-2450-45b9-be8e-0d524f3dee94-xtables-lock\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.681458 kubelet[2674]: I1030 23:58:29.681283 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6qj7\" (UniqueName: \"kubernetes.io/projected/bd06df20-2450-45b9-be8e-0d524f3dee94-kube-api-access-x6qj7\") pod \"calico-node-l9pp2\" (UID: \"bd06df20-2450-45b9-be8e-0d524f3dee94\") " pod="calico-system/calico-node-l9pp2" Oct 30 23:58:29.775559 containerd[1536]: time="2025-10-30T23:58:29.775450638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8578b9fd8f-whm5d,Uid:55a85671-ca16-49a4-b3d5-9545d5d0a30a,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:29.796874 kubelet[2674]: E1030 23:58:29.795802 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.796874 kubelet[2674]: W1030 23:58:29.795825 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.796874 kubelet[2674]: E1030 23:58:29.796339 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.796874 kubelet[2674]: W1030 23:58:29.796352 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.796874 kubelet[2674]: E1030 23:58:29.796689 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end 
of JSON input Oct 30 23:58:29.796874 kubelet[2674]: W1030 23:58:29.796702 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.797289 kubelet[2674]: E1030 23:58:29.797264 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.799208 kubelet[2674]: E1030 23:58:29.799183 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.799276 kubelet[2674]: E1030 23:58:29.799226 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.808870 containerd[1536]: time="2025-10-30T23:58:29.808827684Z" level=info msg="connecting to shim b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa" address="unix:///run/containerd/s/fee94b0f5060912979c41f6806c71e0560961224a30836ef360200f19d6b8918" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:29.840563 systemd[1]: Started cri-containerd-b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa.scope - libcontainer container b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa. 
Oct 30 23:58:29.875035 kubelet[2674]: E1030 23:58:29.874962 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:29.900500 containerd[1536]: time="2025-10-30T23:58:29.900446648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8578b9fd8f-whm5d,Uid:55a85671-ca16-49a4-b3d5-9545d5d0a30a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa\"" Oct 30 23:58:29.902155 containerd[1536]: time="2025-10-30T23:58:29.902093794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 23:58:29.947829 containerd[1536]: time="2025-10-30T23:58:29.947790394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9pp2,Uid:bd06df20-2450-45b9-be8e-0d524f3dee94,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:29.973795 kubelet[2674]: E1030 23:58:29.973760 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.973795 kubelet[2674]: W1030 23:58:29.973785 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.973940 kubelet[2674]: E1030 23:58:29.973808 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.974416 kubelet[2674]: E1030 23:58:29.974225 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.974416 kubelet[2674]: W1030 23:58:29.974241 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.974416 kubelet[2674]: E1030 23:58:29.974420 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.975641 kubelet[2674]: E1030 23:58:29.974680 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.975641 kubelet[2674]: W1030 23:58:29.974691 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.975641 kubelet[2674]: E1030 23:58:29.974701 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.975641 kubelet[2674]: E1030 23:58:29.974915 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.975641 kubelet[2674]: W1030 23:58:29.974925 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.975641 kubelet[2674]: E1030 23:58:29.974935 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.976573 kubelet[2674]: E1030 23:58:29.976557 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.976682 kubelet[2674]: W1030 23:58:29.976636 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.976682 kubelet[2674]: E1030 23:58:29.976654 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.977120 kubelet[2674]: E1030 23:58:29.977107 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.977862 kubelet[2674]: W1030 23:58:29.977284 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.978121 kubelet[2674]: E1030 23:58:29.978055 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.978689 kubelet[2674]: E1030 23:58:29.978673 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.978855 kubelet[2674]: W1030 23:58:29.978795 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.978855 kubelet[2674]: E1030 23:58:29.978814 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.979659 kubelet[2674]: E1030 23:58:29.979581 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.979659 kubelet[2674]: W1030 23:58:29.979595 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.979659 kubelet[2674]: E1030 23:58:29.979608 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.980717 kubelet[2674]: E1030 23:58:29.980700 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.981631 kubelet[2674]: W1030 23:58:29.981422 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.981631 kubelet[2674]: E1030 23:58:29.981449 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.982146 kubelet[2674]: E1030 23:58:29.982016 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.982146 kubelet[2674]: W1030 23:58:29.982050 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.982146 kubelet[2674]: E1030 23:58:29.982062 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.982355 kubelet[2674]: E1030 23:58:29.982332 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.982499 kubelet[2674]: W1030 23:58:29.982438 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.982499 kubelet[2674]: E1030 23:58:29.982455 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.982853 kubelet[2674]: E1030 23:58:29.982777 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.982853 kubelet[2674]: W1030 23:58:29.982789 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.982853 kubelet[2674]: E1030 23:58:29.982799 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.983680 kubelet[2674]: E1030 23:58:29.983529 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.983680 kubelet[2674]: W1030 23:58:29.983547 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.983680 kubelet[2674]: E1030 23:58:29.983561 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.984556 kubelet[2674]: E1030 23:58:29.984539 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.984691 kubelet[2674]: W1030 23:58:29.984631 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.984691 kubelet[2674]: E1030 23:58:29.984650 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.984945 kubelet[2674]: E1030 23:58:29.984932 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.985049 kubelet[2674]: W1030 23:58:29.984999 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.985049 kubelet[2674]: E1030 23:58:29.985014 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.985501 kubelet[2674]: E1030 23:58:29.985364 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.985597 kubelet[2674]: W1030 23:58:29.985580 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.985653 kubelet[2674]: E1030 23:58:29.985643 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.986222 kubelet[2674]: E1030 23:58:29.986152 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.986222 kubelet[2674]: W1030 23:58:29.986172 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.986222 kubelet[2674]: E1030 23:58:29.986184 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.987394 kubelet[2674]: E1030 23:58:29.987315 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.987394 kubelet[2674]: W1030 23:58:29.987341 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.987394 kubelet[2674]: E1030 23:58:29.987358 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.987895 kubelet[2674]: E1030 23:58:29.987847 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.987980 kubelet[2674]: W1030 23:58:29.987968 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.988030 kubelet[2674]: E1030 23:58:29.988020 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.988521 kubelet[2674]: E1030 23:58:29.988506 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.988973 kubelet[2674]: W1030 23:58:29.988595 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.988973 kubelet[2674]: E1030 23:58:29.988616 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.989471 kubelet[2674]: E1030 23:58:29.989455 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.989726 kubelet[2674]: W1030 23:58:29.989557 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.989726 kubelet[2674]: E1030 23:58:29.989575 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.989726 kubelet[2674]: I1030 23:58:29.989610 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszrk\" (UniqueName: \"kubernetes.io/projected/7aeb131b-e092-42f8-a46c-9c20bc9f295e-kube-api-access-lszrk\") pod \"csi-node-driver-kc6vc\" (UID: \"7aeb131b-e092-42f8-a46c-9c20bc9f295e\") " pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:29.990151 kubelet[2674]: E1030 23:58:29.990136 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.990230 kubelet[2674]: W1030 23:58:29.990218 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.990298 kubelet[2674]: E1030 23:58:29.990288 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.990402 kubelet[2674]: I1030 23:58:29.990365 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7aeb131b-e092-42f8-a46c-9c20bc9f295e-kubelet-dir\") pod \"csi-node-driver-kc6vc\" (UID: \"7aeb131b-e092-42f8-a46c-9c20bc9f295e\") " pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:29.990710 containerd[1536]: time="2025-10-30T23:58:29.990670830Z" level=info msg="connecting to shim 12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9" address="unix:///run/containerd/s/8894074ac5bb9cb798adf84f0739aad9e331bef9fbdeb467c33d3d1b81012e75" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:29.991113 kubelet[2674]: E1030 23:58:29.991032 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.991113 kubelet[2674]: W1030 23:58:29.991049 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.991113 kubelet[2674]: E1030 23:58:29.991068 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.992526 kubelet[2674]: E1030 23:58:29.992417 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.992526 kubelet[2674]: W1030 23:58:29.992435 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.992526 kubelet[2674]: E1030 23:58:29.992455 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.993594 kubelet[2674]: E1030 23:58:29.993575 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.993594 kubelet[2674]: W1030 23:58:29.993593 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.993858 kubelet[2674]: E1030 23:58:29.993645 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.993858 kubelet[2674]: I1030 23:58:29.993679 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7aeb131b-e092-42f8-a46c-9c20bc9f295e-registration-dir\") pod \"csi-node-driver-kc6vc\" (UID: \"7aeb131b-e092-42f8-a46c-9c20bc9f295e\") " pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:29.995101 kubelet[2674]: E1030 23:58:29.995079 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.995101 kubelet[2674]: W1030 23:58:29.995097 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.995477 kubelet[2674]: E1030 23:58:29.995447 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.995532 kubelet[2674]: I1030 23:58:29.995491 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7aeb131b-e092-42f8-a46c-9c20bc9f295e-varrun\") pod \"csi-node-driver-kc6vc\" (UID: \"7aeb131b-e092-42f8-a46c-9c20bc9f295e\") " pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:29.996550 kubelet[2674]: E1030 23:58:29.996512 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.996550 kubelet[2674]: W1030 23:58:29.996529 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.996704 kubelet[2674]: E1030 23:58:29.996597 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.996704 kubelet[2674]: E1030 23:58:29.996681 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.996704 kubelet[2674]: W1030 23:58:29.996688 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.996891 kubelet[2674]: E1030 23:58:29.996732 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.996891 kubelet[2674]: E1030 23:58:29.996826 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.996891 kubelet[2674]: W1030 23:58:29.996835 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.997124 kubelet[2674]: E1030 23:58:29.996971 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.997124 kubelet[2674]: I1030 23:58:29.997012 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7aeb131b-e092-42f8-a46c-9c20bc9f295e-socket-dir\") pod \"csi-node-driver-kc6vc\" (UID: \"7aeb131b-e092-42f8-a46c-9c20bc9f295e\") " pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:29.997124 kubelet[2674]: E1030 23:58:29.996979 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.997124 kubelet[2674]: W1030 23:58:29.997030 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.997124 kubelet[2674]: E1030 23:58:29.997051 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.997454 kubelet[2674]: E1030 23:58:29.997437 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.997581 kubelet[2674]: W1030 23:58:29.997548 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.997581 kubelet[2674]: E1030 23:58:29.997567 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:29.998408 kubelet[2674]: E1030 23:58:29.998189 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.998565 kubelet[2674]: W1030 23:58:29.998485 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.998565 kubelet[2674]: E1030 23:58:29.998520 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:29.999687 kubelet[2674]: E1030 23:58:29.999600 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:29.999687 kubelet[2674]: W1030 23:58:29.999616 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:29.999687 kubelet[2674]: E1030 23:58:29.999628 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.000515 kubelet[2674]: E1030 23:58:30.000498 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.001033 kubelet[2674]: W1030 23:58:30.000865 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.001033 kubelet[2674]: E1030 23:58:30.000892 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.001151 kubelet[2674]: E1030 23:58:30.001082 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.001151 kubelet[2674]: W1030 23:58:30.001098 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.001151 kubelet[2674]: E1030 23:58:30.001111 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.027608 systemd[1]: Started cri-containerd-12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9.scope - libcontainer container 12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9. Oct 30 23:58:30.054977 containerd[1536]: time="2025-10-30T23:58:30.054934806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9pp2,Uid:bd06df20-2450-45b9-be8e-0d524f3dee94,Namespace:calico-system,Attempt:0,} returns sandbox id \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\"" Oct 30 23:58:30.099241 kubelet[2674]: E1030 23:58:30.099209 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.099241 kubelet[2674]: W1030 23:58:30.099235 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.099470 kubelet[2674]: E1030 23:58:30.099255 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.099545 kubelet[2674]: E1030 23:58:30.099526 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.099545 kubelet[2674]: W1030 23:58:30.099541 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.099599 kubelet[2674]: E1030 23:58:30.099556 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.099782 kubelet[2674]: E1030 23:58:30.099751 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.099782 kubelet[2674]: W1030 23:58:30.099765 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.099842 kubelet[2674]: E1030 23:58:30.099787 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.099999 kubelet[2674]: E1030 23:58:30.099987 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.100030 kubelet[2674]: W1030 23:58:30.099999 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.100030 kubelet[2674]: E1030 23:58:30.100013 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.100172 kubelet[2674]: E1030 23:58:30.100146 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.100172 kubelet[2674]: W1030 23:58:30.100162 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.100172 kubelet[2674]: E1030 23:58:30.100175 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.100329 kubelet[2674]: E1030 23:58:30.100315 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.100329 kubelet[2674]: W1030 23:58:30.100325 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.100569 kubelet[2674]: E1030 23:58:30.100343 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.100669 kubelet[2674]: E1030 23:58:30.100654 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.100737 kubelet[2674]: W1030 23:58:30.100724 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.100809 kubelet[2674]: E1030 23:58:30.100797 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.101035 kubelet[2674]: E1030 23:58:30.100994 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101035 kubelet[2674]: W1030 23:58:30.101010 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101035 kubelet[2674]: E1030 23:58:30.101032 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.101228 kubelet[2674]: E1030 23:58:30.101211 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101228 kubelet[2674]: W1030 23:58:30.101222 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101298 kubelet[2674]: E1030 23:58:30.101248 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.101434 kubelet[2674]: E1030 23:58:30.101369 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101434 kubelet[2674]: W1030 23:58:30.101391 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101434 kubelet[2674]: E1030 23:58:30.101414 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.101629 kubelet[2674]: E1030 23:58:30.101576 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101629 kubelet[2674]: W1030 23:58:30.101588 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101629 kubelet[2674]: E1030 23:58:30.101600 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.101916 kubelet[2674]: E1030 23:58:30.101746 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101916 kubelet[2674]: W1030 23:58:30.101762 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101916 kubelet[2674]: E1030 23:58:30.101773 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.101916 kubelet[2674]: E1030 23:58:30.101903 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.101916 kubelet[2674]: W1030 23:58:30.101910 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.101916 kubelet[2674]: E1030 23:58:30.101923 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.102119 kubelet[2674]: E1030 23:58:30.102088 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.102119 kubelet[2674]: W1030 23:58:30.102097 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.102119 kubelet[2674]: E1030 23:58:30.102112 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.102261 kubelet[2674]: E1030 23:58:30.102249 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.102293 kubelet[2674]: W1030 23:58:30.102262 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.102293 kubelet[2674]: E1030 23:58:30.102281 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.102496 kubelet[2674]: E1030 23:58:30.102484 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.102496 kubelet[2674]: W1030 23:58:30.102496 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.102561 kubelet[2674]: E1030 23:58:30.102509 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.102653 kubelet[2674]: E1030 23:58:30.102643 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.102683 kubelet[2674]: W1030 23:58:30.102653 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.102683 kubelet[2674]: E1030 23:58:30.102665 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.102842 kubelet[2674]: E1030 23:58:30.102831 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.102877 kubelet[2674]: W1030 23:58:30.102843 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.102940 kubelet[2674]: E1030 23:58:30.102921 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.103007 kubelet[2674]: E1030 23:58:30.102988 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103007 kubelet[2674]: W1030 23:58:30.103001 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103063 kubelet[2674]: E1030 23:58:30.103013 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.103164 kubelet[2674]: E1030 23:58:30.103152 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103164 kubelet[2674]: W1030 23:58:30.103164 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103234 kubelet[2674]: E1030 23:58:30.103192 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.103334 kubelet[2674]: E1030 23:58:30.103322 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103374 kubelet[2674]: W1030 23:58:30.103333 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103374 kubelet[2674]: E1030 23:58:30.103357 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.103554 kubelet[2674]: E1030 23:58:30.103543 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103585 kubelet[2674]: W1030 23:58:30.103554 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103585 kubelet[2674]: E1030 23:58:30.103567 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.103719 kubelet[2674]: E1030 23:58:30.103708 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103746 kubelet[2674]: W1030 23:58:30.103721 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103746 kubelet[2674]: E1030 23:58:30.103729 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.103874 kubelet[2674]: E1030 23:58:30.103856 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.103874 kubelet[2674]: W1030 23:58:30.103867 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.103874 kubelet[2674]: E1030 23:58:30.103874 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.104045 kubelet[2674]: E1030 23:58:30.104016 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.104045 kubelet[2674]: W1030 23:58:30.104040 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.104104 kubelet[2674]: E1030 23:58:30.104048 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:30.114678 kubelet[2674]: E1030 23:58:30.114602 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:30.114678 kubelet[2674]: W1030 23:58:30.114625 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:30.114678 kubelet[2674]: E1030 23:58:30.114643 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:30.856203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080579861.mount: Deactivated successfully. Oct 30 23:58:31.053150 kubelet[2674]: E1030 23:58:31.053061 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:31.353343 containerd[1536]: time="2025-10-30T23:58:31.352706656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:31.353795 containerd[1536]: time="2025-10-30T23:58:31.353760631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 30 23:58:31.357712 containerd[1536]: time="2025-10-30T23:58:31.356959437Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:31.361116 containerd[1536]: time="2025-10-30T23:58:31.361085337Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:31.361606 containerd[1536]: time="2025-10-30T23:58:31.361485903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.459117545s" Oct 30 23:58:31.361606 containerd[1536]: time="2025-10-30T23:58:31.361520343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 30 23:58:31.362321 containerd[1536]: time="2025-10-30T23:58:31.362282874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 23:58:31.380680 containerd[1536]: time="2025-10-30T23:58:31.380641460Z" level=info msg="CreateContainer within sandbox \"b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 23:58:31.390427 containerd[1536]: time="2025-10-30T23:58:31.390359081Z" level=info msg="Container 56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:31.406009 containerd[1536]: time="2025-10-30T23:58:31.405969146Z" level=info msg="CreateContainer within sandbox \"b4c4d75e4866cf477038ee9f3b8a69a6390daa3fa6373a4ba08cce894b8253aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f\"" Oct 30 23:58:31.406924 containerd[1536]: time="2025-10-30T23:58:31.406897720Z" level=info msg="StartContainer for 
\"56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f\"" Oct 30 23:58:31.408260 containerd[1536]: time="2025-10-30T23:58:31.408233699Z" level=info msg="connecting to shim 56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f" address="unix:///run/containerd/s/fee94b0f5060912979c41f6806c71e0560961224a30836ef360200f19d6b8918" protocol=ttrpc version=3 Oct 30 23:58:31.439594 systemd[1]: Started cri-containerd-56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f.scope - libcontainer container 56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f. Oct 30 23:58:31.490588 containerd[1536]: time="2025-10-30T23:58:31.490520930Z" level=info msg="StartContainer for \"56cd4a4c93f4f92b2234ab69a5463dcf483f65de0b52eedda7913b8668937a5f\" returns successfully" Oct 30 23:58:32.206029 kubelet[2674]: E1030 23:58:32.205999 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.206564 kubelet[2674]: W1030 23:58:32.206417 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.206564 kubelet[2674]: E1030 23:58:32.206452 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.206873 kubelet[2674]: E1030 23:58:32.206636 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.206873 kubelet[2674]: W1030 23:58:32.206646 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.206873 kubelet[2674]: E1030 23:58:32.206687 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.207000 kubelet[2674]: E1030 23:58:32.206986 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.207056 kubelet[2674]: W1030 23:58:32.207045 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.207109 kubelet[2674]: E1030 23:58:32.207099 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.207341 kubelet[2674]: E1030 23:58:32.207328 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.207554 kubelet[2674]: W1030 23:58:32.207442 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.207554 kubelet[2674]: E1030 23:58:32.207463 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.207703 kubelet[2674]: E1030 23:58:32.207690 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.207757 kubelet[2674]: W1030 23:58:32.207747 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.207899 kubelet[2674]: E1030 23:58:32.207809 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.208001 kubelet[2674]: E1030 23:58:32.207989 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.208138 kubelet[2674]: W1030 23:58:32.208045 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.208138 kubelet[2674]: E1030 23:58:32.208060 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.208262 kubelet[2674]: E1030 23:58:32.208250 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.208316 kubelet[2674]: W1030 23:58:32.208306 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.208395 kubelet[2674]: E1030 23:58:32.208371 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.208739 kubelet[2674]: E1030 23:58:32.208628 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.208739 kubelet[2674]: W1030 23:58:32.208640 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.208739 kubelet[2674]: E1030 23:58:32.208653 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.208901 kubelet[2674]: E1030 23:58:32.208887 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.208966 kubelet[2674]: W1030 23:58:32.208954 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.209020 kubelet[2674]: E1030 23:58:32.209010 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.209312 kubelet[2674]: E1030 23:58:32.209226 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.209312 kubelet[2674]: W1030 23:58:32.209239 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.209312 kubelet[2674]: E1030 23:58:32.209248 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.209504 kubelet[2674]: E1030 23:58:32.209490 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.209713 kubelet[2674]: W1030 23:58:32.209564 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.209713 kubelet[2674]: E1030 23:58:32.209580 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.209871 kubelet[2674]: E1030 23:58:32.209847 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.209922 kubelet[2674]: W1030 23:58:32.209912 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.209976 kubelet[2674]: E1030 23:58:32.209966 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.210279 kubelet[2674]: E1030 23:58:32.210187 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.210279 kubelet[2674]: W1030 23:58:32.210200 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.210279 kubelet[2674]: E1030 23:58:32.210210 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.210569 kubelet[2674]: E1030 23:58:32.210460 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.210569 kubelet[2674]: W1030 23:58:32.210474 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.210569 kubelet[2674]: E1030 23:58:32.210484 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.210717 kubelet[2674]: E1030 23:58:32.210704 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.210774 kubelet[2674]: W1030 23:58:32.210762 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.210830 kubelet[2674]: E1030 23:58:32.210820 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.221171 kubelet[2674]: E1030 23:58:32.221053 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.221171 kubelet[2674]: W1030 23:58:32.221079 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.221171 kubelet[2674]: E1030 23:58:32.221096 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.221325 kubelet[2674]: E1030 23:58:32.221260 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.221325 kubelet[2674]: W1030 23:58:32.221268 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.221325 kubelet[2674]: E1030 23:58:32.221277 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.221457 kubelet[2674]: E1030 23:58:32.221431 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.221457 kubelet[2674]: W1030 23:58:32.221445 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.221457 kubelet[2674]: E1030 23:58:32.221456 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.221738 kubelet[2674]: E1030 23:58:32.221680 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.221738 kubelet[2674]: W1030 23:58:32.221695 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.221738 kubelet[2674]: E1030 23:58:32.221714 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.221915 kubelet[2674]: E1030 23:58:32.221899 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.221915 kubelet[2674]: W1030 23:58:32.221912 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.221969 kubelet[2674]: E1030 23:58:32.221928 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.222129 kubelet[2674]: E1030 23:58:32.222114 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.222161 kubelet[2674]: W1030 23:58:32.222129 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.222161 kubelet[2674]: E1030 23:58:32.222145 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.222587 kubelet[2674]: E1030 23:58:32.222545 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.222587 kubelet[2674]: W1030 23:58:32.222561 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.222587 kubelet[2674]: E1030 23:58:32.222579 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.223040 kubelet[2674]: E1030 23:58:32.222950 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.223040 kubelet[2674]: W1030 23:58:32.222965 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.223040 kubelet[2674]: E1030 23:58:32.222985 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.223219 kubelet[2674]: E1030 23:58:32.223126 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.223219 kubelet[2674]: W1030 23:58:32.223155 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.223219 kubelet[2674]: E1030 23:58:32.223183 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.223390 kubelet[2674]: E1030 23:58:32.223342 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.223390 kubelet[2674]: W1030 23:58:32.223354 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.223494 kubelet[2674]: E1030 23:58:32.223441 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.223676 kubelet[2674]: E1030 23:58:32.223633 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.223676 kubelet[2674]: W1030 23:58:32.223647 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.223676 kubelet[2674]: E1030 23:58:32.223662 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.224011 kubelet[2674]: E1030 23:58:32.223976 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.224011 kubelet[2674]: W1030 23:58:32.223994 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.224011 kubelet[2674]: E1030 23:58:32.224011 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.224282 kubelet[2674]: E1030 23:58:32.224262 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.224282 kubelet[2674]: W1030 23:58:32.224277 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.224364 kubelet[2674]: E1030 23:58:32.224292 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.224607 kubelet[2674]: E1030 23:58:32.224548 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.224607 kubelet[2674]: W1030 23:58:32.224562 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.224607 kubelet[2674]: E1030 23:58:32.224578 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.224911 kubelet[2674]: E1030 23:58:32.224861 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.224911 kubelet[2674]: W1030 23:58:32.224877 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.224911 kubelet[2674]: E1030 23:58:32.224892 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.225267 kubelet[2674]: E1030 23:58:32.225245 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.225345 kubelet[2674]: W1030 23:58:32.225331 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.226534 kubelet[2674]: E1030 23:58:32.226340 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.226615 kubelet[2674]: E1030 23:58:32.226592 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.226615 kubelet[2674]: W1030 23:58:32.226609 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.226665 kubelet[2674]: E1030 23:58:32.226626 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 23:58:32.226779 kubelet[2674]: E1030 23:58:32.226764 2674 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 23:58:32.226815 kubelet[2674]: W1030 23:58:32.226775 2674 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 23:58:32.226815 kubelet[2674]: E1030 23:58:32.226791 2674 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 23:58:32.264602 containerd[1536]: time="2025-10-30T23:58:32.264558658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:32.265263 containerd[1536]: time="2025-10-30T23:58:32.265103466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 30 23:58:32.266013 containerd[1536]: time="2025-10-30T23:58:32.265926397Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:32.268019 containerd[1536]: time="2025-10-30T23:58:32.267978986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:32.268790 containerd[1536]: time="2025-10-30T23:58:32.268754396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 906.437441ms" Oct 30 23:58:32.268790 containerd[1536]: time="2025-10-30T23:58:32.268786917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 30 23:58:32.271196 containerd[1536]: time="2025-10-30T23:58:32.271151910Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 23:58:32.282406 containerd[1536]: time="2025-10-30T23:58:32.281779897Z" level=info msg="Container 8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:32.283818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253547982.mount: Deactivated successfully. Oct 30 23:58:32.290654 containerd[1536]: time="2025-10-30T23:58:32.290595300Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\"" Oct 30 23:58:32.291140 containerd[1536]: time="2025-10-30T23:58:32.291112107Z" level=info msg="StartContainer for \"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\"" Oct 30 23:58:32.292962 containerd[1536]: time="2025-10-30T23:58:32.292932612Z" level=info msg="connecting to shim 8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0" address="unix:///run/containerd/s/8894074ac5bb9cb798adf84f0739aad9e331bef9fbdeb467c33d3d1b81012e75" protocol=ttrpc version=3 Oct 30 23:58:32.316617 systemd[1]: Started cri-containerd-8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0.scope - libcontainer container 8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0. Oct 30 23:58:32.363127 systemd[1]: cri-containerd-8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0.scope: Deactivated successfully. 
Oct 30 23:58:32.366040 containerd[1536]: time="2025-10-30T23:58:32.365991347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\" id:\"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\" pid:3428 exited_at:{seconds:1761868712 nanos:365495380}" Oct 30 23:58:32.392364 containerd[1536]: time="2025-10-30T23:58:32.392297312Z" level=info msg="received exit event container_id:\"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\" id:\"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\" pid:3428 exited_at:{seconds:1761868712 nanos:365495380}" Oct 30 23:58:32.394542 containerd[1536]: time="2025-10-30T23:58:32.394431822Z" level=info msg="StartContainer for \"8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0\" returns successfully" Oct 30 23:58:32.413730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a55e75803f5b80d5fcfc131b4a1fe7fcd2f349a2820d2ef222647d7be9992d0-rootfs.mount: Deactivated successfully. 
Oct 30 23:58:33.053024 kubelet[2674]: E1030 23:58:33.052958 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:33.142665 kubelet[2674]: I1030 23:58:33.142604 2674 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 23:58:33.146077 containerd[1536]: time="2025-10-30T23:58:33.146040422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 23:58:33.201605 kubelet[2674]: I1030 23:58:33.201547 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8578b9fd8f-whm5d" podStartSLOduration=2.741173559 podStartE2EDuration="4.201532722s" podCreationTimestamp="2025-10-30 23:58:29 +0000 UTC" firstStartedPulling="2025-10-30 23:58:29.901796349 +0000 UTC m=+22.938345085" lastFinishedPulling="2025-10-30 23:58:31.362155512 +0000 UTC m=+24.398704248" observedRunningTime="2025-10-30 23:58:32.157594212 +0000 UTC m=+25.194142948" watchObservedRunningTime="2025-10-30 23:58:33.201532722 +0000 UTC m=+26.238081458" Oct 30 23:58:35.053412 kubelet[2674]: E1030 23:58:35.053115 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:36.851442 containerd[1536]: time="2025-10-30T23:58:36.851032244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:36.851794 containerd[1536]: time="2025-10-30T23:58:36.851605691Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 30 23:58:36.852545 containerd[1536]: time="2025-10-30T23:58:36.852519342Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:36.854508 containerd[1536]: time="2025-10-30T23:58:36.854472645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:36.855812 containerd[1536]: time="2025-10-30T23:58:36.855786660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.709701318s" Oct 30 23:58:36.855850 containerd[1536]: time="2025-10-30T23:58:36.855819901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 30 23:58:36.857580 containerd[1536]: time="2025-10-30T23:58:36.857548241Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 23:58:36.866561 containerd[1536]: time="2025-10-30T23:58:36.865228293Z" level=info msg="Container 32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:36.879111 containerd[1536]: time="2025-10-30T23:58:36.878912576Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\"" Oct 30 23:58:36.880663 containerd[1536]: time="2025-10-30T23:58:36.880608036Z" level=info msg="StartContainer for \"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\"" Oct 30 23:58:36.882878 containerd[1536]: time="2025-10-30T23:58:36.882833902Z" level=info msg="connecting to shim 32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05" address="unix:///run/containerd/s/8894074ac5bb9cb798adf84f0739aad9e331bef9fbdeb467c33d3d1b81012e75" protocol=ttrpc version=3 Oct 30 23:58:36.911623 systemd[1]: Started cri-containerd-32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05.scope - libcontainer container 32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05. Oct 30 23:58:36.948472 containerd[1536]: time="2025-10-30T23:58:36.948425363Z" level=info msg="StartContainer for \"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\" returns successfully" Oct 30 23:58:37.054420 kubelet[2674]: E1030 23:58:37.054093 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:37.530904 systemd[1]: cri-containerd-32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05.scope: Deactivated successfully. Oct 30 23:58:37.531232 systemd[1]: cri-containerd-32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05.scope: Consumed 460ms CPU time, 171.8M memory peak, 3.1M read from disk, 165.9M written to disk. 
Oct 30 23:58:37.533105 containerd[1536]: time="2025-10-30T23:58:37.533073695Z" level=info msg="received exit event container_id:\"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\" id:\"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\" pid:3489 exited_at:{seconds:1761868717 nanos:532902733}" Oct 30 23:58:37.533218 containerd[1536]: time="2025-10-30T23:58:37.533190216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\" id:\"32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05\" pid:3489 exited_at:{seconds:1761868717 nanos:532902733}" Oct 30 23:58:37.557335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32b3aed09b170db91ec084ce27f058e76643dd1f22fc933d3302416343908e05-rootfs.mount: Deactivated successfully. Oct 30 23:58:37.590943 kubelet[2674]: I1030 23:58:37.590905 2674 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 23:58:37.631253 kubelet[2674]: W1030 23:58:37.629825 2674 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Oct 30 23:58:37.641764 kubelet[2674]: E1030 23:58:37.641665 2674 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Oct 30 23:58:37.643498 systemd[1]: Created slice kubepods-besteffort-pod09999d1d_585b_4b92_bfcd_e8302c7a6bdf.slice - libcontainer 
container kubepods-besteffort-pod09999d1d_585b_4b92_bfcd_e8302c7a6bdf.slice. Oct 30 23:58:37.658663 systemd[1]: Created slice kubepods-burstable-podc1660b4c_0b9c_4abb_b46f_835991a79c24.slice - libcontainer container kubepods-burstable-podc1660b4c_0b9c_4abb_b46f_835991a79c24.slice. Oct 30 23:58:37.658930 kubelet[2674]: I1030 23:58:37.658885 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3085853b-0547-4b3b-8388-274e148caac5-whisker-ca-bundle\") pod \"whisker-66cc884b77-4sqd5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " pod="calico-system/whisker-66cc884b77-4sqd5" Oct 30 23:58:37.658930 kubelet[2674]: I1030 23:58:37.658929 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9077330-c4c7-4bab-b6ff-e93888d2d752-calico-apiserver-certs\") pod \"calico-apiserver-777569f766-kspc6\" (UID: \"e9077330-c4c7-4bab-b6ff-e93888d2d752\") " pod="calico-apiserver/calico-apiserver-777569f766-kspc6" Oct 30 23:58:37.659019 kubelet[2674]: I1030 23:58:37.658953 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkz6d\" (UniqueName: \"kubernetes.io/projected/09999d1d-585b-4b92-bfcd-e8302c7a6bdf-kube-api-access-rkz6d\") pod \"calico-kube-controllers-7797789cc7-9n2z5\" (UID: \"09999d1d-585b-4b92-bfcd-e8302c7a6bdf\") " pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" Oct 30 23:58:37.659019 kubelet[2674]: I1030 23:58:37.658975 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/af40a0eb-fde7-43c7-a00e-4d438056d56c-calico-apiserver-certs\") pod \"calico-apiserver-7bf97986c9-8575b\" (UID: \"af40a0eb-fde7-43c7-a00e-4d438056d56c\") " 
pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" Oct 30 23:58:37.659019 kubelet[2674]: I1030 23:58:37.658990 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09999d1d-585b-4b92-bfcd-e8302c7a6bdf-tigera-ca-bundle\") pod \"calico-kube-controllers-7797789cc7-9n2z5\" (UID: \"09999d1d-585b-4b92-bfcd-e8302c7a6bdf\") " pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" Oct 30 23:58:37.659019 kubelet[2674]: I1030 23:58:37.659005 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3085853b-0547-4b3b-8388-274e148caac5-whisker-backend-key-pair\") pod \"whisker-66cc884b77-4sqd5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " pod="calico-system/whisker-66cc884b77-4sqd5" Oct 30 23:58:37.659103 kubelet[2674]: I1030 23:58:37.659025 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96333c0e-7c74-4fa2-93be-d0ba59addf64-calico-apiserver-certs\") pod \"calico-apiserver-7bf97986c9-8z8d7\" (UID: \"96333c0e-7c74-4fa2-93be-d0ba59addf64\") " pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" Oct 30 23:58:37.659103 kubelet[2674]: I1030 23:58:37.659042 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1660b4c-0b9c-4abb-b46f-835991a79c24-config-volume\") pod \"coredns-668d6bf9bc-qqbjx\" (UID: \"c1660b4c-0b9c-4abb-b46f-835991a79c24\") " pod="kube-system/coredns-668d6bf9bc-qqbjx" Oct 30 23:58:37.659103 kubelet[2674]: I1030 23:58:37.659059 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjzrh\" (UniqueName: 
\"kubernetes.io/projected/c1660b4c-0b9c-4abb-b46f-835991a79c24-kube-api-access-gjzrh\") pod \"coredns-668d6bf9bc-qqbjx\" (UID: \"c1660b4c-0b9c-4abb-b46f-835991a79c24\") " pod="kube-system/coredns-668d6bf9bc-qqbjx" Oct 30 23:58:37.659103 kubelet[2674]: I1030 23:58:37.659076 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6ce2bdb-00f7-424e-b2d2-474513bfd59b-config-volume\") pod \"coredns-668d6bf9bc-9pq2k\" (UID: \"b6ce2bdb-00f7-424e-b2d2-474513bfd59b\") " pod="kube-system/coredns-668d6bf9bc-9pq2k" Oct 30 23:58:37.659103 kubelet[2674]: I1030 23:58:37.659091 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86wx\" (UniqueName: \"kubernetes.io/projected/b6ce2bdb-00f7-424e-b2d2-474513bfd59b-kube-api-access-q86wx\") pod \"coredns-668d6bf9bc-9pq2k\" (UID: \"b6ce2bdb-00f7-424e-b2d2-474513bfd59b\") " pod="kube-system/coredns-668d6bf9bc-9pq2k" Oct 30 23:58:37.659202 kubelet[2674]: I1030 23:58:37.659152 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btg9k\" (UniqueName: \"kubernetes.io/projected/e9077330-c4c7-4bab-b6ff-e93888d2d752-kube-api-access-btg9k\") pod \"calico-apiserver-777569f766-kspc6\" (UID: \"e9077330-c4c7-4bab-b6ff-e93888d2d752\") " pod="calico-apiserver/calico-apiserver-777569f766-kspc6" Oct 30 23:58:37.659202 kubelet[2674]: I1030 23:58:37.659175 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdbg\" (UniqueName: \"kubernetes.io/projected/96333c0e-7c74-4fa2-93be-d0ba59addf64-kube-api-access-2qdbg\") pod \"calico-apiserver-7bf97986c9-8z8d7\" (UID: \"96333c0e-7c74-4fa2-93be-d0ba59addf64\") " pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" Oct 30 23:58:37.659245 kubelet[2674]: I1030 23:58:37.659201 2674 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4279\" (UniqueName: \"kubernetes.io/projected/3085853b-0547-4b3b-8388-274e148caac5-kube-api-access-h4279\") pod \"whisker-66cc884b77-4sqd5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " pod="calico-system/whisker-66cc884b77-4sqd5" Oct 30 23:58:37.659245 kubelet[2674]: I1030 23:58:37.659233 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85jtx\" (UniqueName: \"kubernetes.io/projected/af40a0eb-fde7-43c7-a00e-4d438056d56c-kube-api-access-85jtx\") pod \"calico-apiserver-7bf97986c9-8575b\" (UID: \"af40a0eb-fde7-43c7-a00e-4d438056d56c\") " pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" Oct 30 23:58:37.668738 systemd[1]: Created slice kubepods-burstable-podb6ce2bdb_00f7_424e_b2d2_474513bfd59b.slice - libcontainer container kubepods-burstable-podb6ce2bdb_00f7_424e_b2d2_474513bfd59b.slice. Oct 30 23:58:37.676256 systemd[1]: Created slice kubepods-besteffort-pod3085853b_0547_4b3b_8388_274e148caac5.slice - libcontainer container kubepods-besteffort-pod3085853b_0547_4b3b_8388_274e148caac5.slice. Oct 30 23:58:37.683257 systemd[1]: Created slice kubepods-besteffort-pod96333c0e_7c74_4fa2_93be_d0ba59addf64.slice - libcontainer container kubepods-besteffort-pod96333c0e_7c74_4fa2_93be_d0ba59addf64.slice. Oct 30 23:58:37.690865 systemd[1]: Created slice kubepods-besteffort-pode9077330_c4c7_4bab_b6ff_e93888d2d752.slice - libcontainer container kubepods-besteffort-pode9077330_c4c7_4bab_b6ff_e93888d2d752.slice. Oct 30 23:58:37.696238 systemd[1]: Created slice kubepods-besteffort-podaf40a0eb_fde7_43c7_a00e_4d438056d56c.slice - libcontainer container kubepods-besteffort-podaf40a0eb_fde7_43c7_a00e_4d438056d56c.slice. Oct 30 23:58:37.702467 systemd[1]: Created slice kubepods-besteffort-podb6340a74_f3fe_42f0_a5b5_631bdd75d561.slice - libcontainer container kubepods-besteffort-podb6340a74_f3fe_42f0_a5b5_631bdd75d561.slice. 
Oct 30 23:58:37.760128 kubelet[2674]: I1030 23:58:37.760086 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b6340a74-f3fe-42f0-a5b5-631bdd75d561-goldmane-key-pair\") pod \"goldmane-666569f655-nqdgd\" (UID: \"b6340a74-f3fe-42f0-a5b5-631bdd75d561\") " pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:37.760415 kubelet[2674]: I1030 23:58:37.760369 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj8d\" (UniqueName: \"kubernetes.io/projected/b6340a74-f3fe-42f0-a5b5-631bdd75d561-kube-api-access-2hj8d\") pod \"goldmane-666569f655-nqdgd\" (UID: \"b6340a74-f3fe-42f0-a5b5-631bdd75d561\") " pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:37.760520 kubelet[2674]: I1030 23:58:37.760506 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6340a74-f3fe-42f0-a5b5-631bdd75d561-config\") pod \"goldmane-666569f655-nqdgd\" (UID: \"b6340a74-f3fe-42f0-a5b5-631bdd75d561\") " pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:37.760598 kubelet[2674]: I1030 23:58:37.760584 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6340a74-f3fe-42f0-a5b5-631bdd75d561-goldmane-ca-bundle\") pod \"goldmane-666569f655-nqdgd\" (UID: \"b6340a74-f3fe-42f0-a5b5-631bdd75d561\") " pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:37.954501 containerd[1536]: time="2025-10-30T23:58:37.954463212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797789cc7-9n2z5,Uid:09999d1d-585b-4b92-bfcd-e8302c7a6bdf,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:37.965992 containerd[1536]: time="2025-10-30T23:58:37.965949583Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qqbjx,Uid:c1660b4c-0b9c-4abb-b46f-835991a79c24,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:37.973717 containerd[1536]: time="2025-10-30T23:58:37.973679512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pq2k,Uid:b6ce2bdb-00f7-424e-b2d2-474513bfd59b,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:37.982133 containerd[1536]: time="2025-10-30T23:58:37.982095329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cc884b77-4sqd5,Uid:3085853b-0547-4b3b-8388-274e148caac5,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:38.006352 containerd[1536]: time="2025-10-30T23:58:38.006307844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nqdgd,Uid:b6340a74-f3fe-42f0-a5b5-631bdd75d561,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:38.057513 containerd[1536]: time="2025-10-30T23:58:38.057465891Z" level=error msg="Failed to destroy network for sandbox \"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.059307 containerd[1536]: time="2025-10-30T23:58:38.059261951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797789cc7-9n2z5,Uid:09999d1d-585b-4b92-bfcd-e8302c7a6bdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.061820 containerd[1536]: time="2025-10-30T23:58:38.061779619Z" level=error msg="Failed to destroy network for sandbox 
\"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.062504 containerd[1536]: time="2025-10-30T23:58:38.062469227Z" level=error msg="Failed to destroy network for sandbox \"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.063059 containerd[1536]: time="2025-10-30T23:58:38.063029113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cc884b77-4sqd5,Uid:3085853b-0547-4b3b-8388-274e148caac5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.064044 kubelet[2674]: E1030 23:58:38.063943 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.064460 kubelet[2674]: E1030 23:58:38.064073 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cc884b77-4sqd5" Oct 30 23:58:38.064460 kubelet[2674]: E1030 23:58:38.064094 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66cc884b77-4sqd5" Oct 30 23:58:38.064460 kubelet[2674]: E1030 23:58:38.064165 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66cc884b77-4sqd5_calico-system(3085853b-0547-4b3b-8388-274e148caac5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66cc884b77-4sqd5_calico-system(3085853b-0547-4b3b-8388-274e148caac5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6df2c167f459e3d271c5cef8981bf97057a7aebdfb3d2f9a39951a1d275253f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66cc884b77-4sqd5" podUID="3085853b-0547-4b3b-8388-274e148caac5" Oct 30 23:58:38.065239 containerd[1536]: time="2025-10-30T23:58:38.065180417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pq2k,Uid:b6ce2bdb-00f7-424e-b2d2-474513bfd59b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.065584 kubelet[2674]: E1030 23:58:38.065360 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.065584 kubelet[2674]: E1030 23:58:38.065425 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9pq2k" Oct 30 23:58:38.065584 kubelet[2674]: E1030 23:58:38.065444 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9pq2k" Oct 30 23:58:38.065671 kubelet[2674]: E1030 23:58:38.065475 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9pq2k_kube-system(b6ce2bdb-00f7-424e-b2d2-474513bfd59b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9pq2k_kube-system(b6ce2bdb-00f7-424e-b2d2-474513bfd59b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f1818f99af4c2baf2d66ae640e85ca438a71b34ae3ca574c804ecdf65f9a43f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9pq2k" podUID="b6ce2bdb-00f7-424e-b2d2-474513bfd59b" Oct 30 23:58:38.067248 kubelet[2674]: E1030 23:58:38.067171 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.067248 kubelet[2674]: E1030 23:58:38.067238 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" Oct 30 23:58:38.067345 kubelet[2674]: E1030 23:58:38.067256 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" Oct 30 23:58:38.067345 kubelet[2674]: E1030 23:58:38.067303 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7797789cc7-9n2z5_calico-system(09999d1d-585b-4b92-bfcd-e8302c7a6bdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7797789cc7-9n2z5_calico-system(09999d1d-585b-4b92-bfcd-e8302c7a6bdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5967d9a143d2019e0ec21b53552466387cb0b1a2851e3684afb32eda0c244a7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf" Oct 30 23:58:38.070753 containerd[1536]: time="2025-10-30T23:58:38.070719318Z" level=error msg="Failed to destroy network for sandbox \"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.072517 containerd[1536]: time="2025-10-30T23:58:38.072465017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqbjx,Uid:c1660b4c-0b9c-4abb-b46f-835991a79c24,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.072699 kubelet[2674]: E1030 23:58:38.072659 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.072757 kubelet[2674]: E1030 23:58:38.072712 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qqbjx" Oct 30 23:58:38.072757 kubelet[2674]: E1030 23:58:38.072730 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qqbjx" Oct 30 23:58:38.072810 kubelet[2674]: E1030 23:58:38.072766 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qqbjx_kube-system(c1660b4c-0b9c-4abb-b46f-835991a79c24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qqbjx_kube-system(c1660b4c-0b9c-4abb-b46f-835991a79c24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39ce17df168e276cecef1a09cac4778774ba06d91470958daaa54b1f83441ad6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qqbjx" podUID="c1660b4c-0b9c-4abb-b46f-835991a79c24" Oct 30 23:58:38.083027 containerd[1536]: time="2025-10-30T23:58:38.082972734Z" level=error msg="Failed to destroy 
network for sandbox \"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.084143 containerd[1536]: time="2025-10-30T23:58:38.084092306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nqdgd,Uid:b6340a74-f3fe-42f0-a5b5-631bdd75d561,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.084387 kubelet[2674]: E1030 23:58:38.084344 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:38.084449 kubelet[2674]: E1030 23:58:38.084421 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:38.084449 kubelet[2674]: E1030 23:58:38.084441 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nqdgd" Oct 30 23:58:38.084498 kubelet[2674]: E1030 23:58:38.084478 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nqdgd_calico-system(b6340a74-f3fe-42f0-a5b5-631bdd75d561)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nqdgd_calico-system(b6340a74-f3fe-42f0-a5b5-631bdd75d561)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30d6000206319f0820fe96ca656cd06ed05a08fae136791c69e3c701b139030b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561" Oct 30 23:58:38.157764 containerd[1536]: time="2025-10-30T23:58:38.157725642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 23:58:38.790568 kubelet[2674]: E1030 23:58:38.790501 2674 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.790692 kubelet[2674]: E1030 23:58:38.790505 2674 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.791326 kubelet[2674]: E1030 23:58:38.790645 2674 projected.go:194] Error preparing data for projected volume kube-api-access-2qdbg for pod calico-apiserver/calico-apiserver-7bf97986c9-8z8d7: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.791455 kubelet[2674]: E1030 23:58:38.791429 
2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96333c0e-7c74-4fa2-93be-d0ba59addf64-kube-api-access-2qdbg podName:96333c0e-7c74-4fa2-93be-d0ba59addf64 nodeName:}" failed. No retries permitted until 2025-10-30 23:58:39.291375344 +0000 UTC m=+32.327924080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2qdbg" (UniqueName: "kubernetes.io/projected/96333c0e-7c74-4fa2-93be-d0ba59addf64-kube-api-access-2qdbg") pod "calico-apiserver-7bf97986c9-8z8d7" (UID: "96333c0e-7c74-4fa2-93be-d0ba59addf64") : failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.791616 kubelet[2674]: E1030 23:58:38.791598 2674 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.791658 kubelet[2674]: E1030 23:58:38.791619 2674 projected.go:194] Error preparing data for projected volume kube-api-access-btg9k for pod calico-apiserver/calico-apiserver-777569f766-kspc6: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.791682 kubelet[2674]: E1030 23:58:38.791664 2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9077330-c4c7-4bab-b6ff-e93888d2d752-kube-api-access-btg9k podName:e9077330-c4c7-4bab-b6ff-e93888d2d752 nodeName:}" failed. No retries permitted until 2025-10-30 23:58:39.291654147 +0000 UTC m=+32.328202883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-btg9k" (UniqueName: "kubernetes.io/projected/e9077330-c4c7-4bab-b6ff-e93888d2d752-kube-api-access-btg9k") pod "calico-apiserver-777569f766-kspc6" (UID: "e9077330-c4c7-4bab-b6ff-e93888d2d752") : failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.792581 kubelet[2674]: E1030 23:58:38.790553 2674 projected.go:194] Error preparing data for projected volume kube-api-access-85jtx for pod calico-apiserver/calico-apiserver-7bf97986c9-8575b: failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.792693 kubelet[2674]: E1030 23:58:38.792667 2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/af40a0eb-fde7-43c7-a00e-4d438056d56c-kube-api-access-85jtx podName:af40a0eb-fde7-43c7-a00e-4d438056d56c nodeName:}" failed. No retries permitted until 2025-10-30 23:58:39.292646798 +0000 UTC m=+32.329195534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-85jtx" (UniqueName: "kubernetes.io/projected/af40a0eb-fde7-43c7-a00e-4d438056d56c-kube-api-access-85jtx") pod "calico-apiserver-7bf97986c9-8575b" (UID: "af40a0eb-fde7-43c7-a00e-4d438056d56c") : failed to sync configmap cache: timed out waiting for the condition Oct 30 23:58:38.866899 systemd[1]: run-netns-cni\x2dd2e2f502\x2d783b\x2daa09\x2df0db\x2df19c0257d051.mount: Deactivated successfully. Oct 30 23:58:38.866983 systemd[1]: run-netns-cni\x2db7fb2353\x2d1693\x2dc922\x2d6beb\x2d4c02aa16e634.mount: Deactivated successfully. Oct 30 23:58:38.867025 systemd[1]: run-netns-cni\x2d750439a3\x2deffb\x2dabf6\x2d039c\x2d7f383bce6402.mount: Deactivated successfully. Oct 30 23:58:39.062846 systemd[1]: Created slice kubepods-besteffort-pod7aeb131b_e092_42f8_a46c_9c20bc9f295e.slice - libcontainer container kubepods-besteffort-pod7aeb131b_e092_42f8_a46c_9c20bc9f295e.slice. 
Oct 30 23:58:39.073599 containerd[1536]: time="2025-10-30T23:58:39.073559284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kc6vc,Uid:7aeb131b-e092-42f8-a46c-9c20bc9f295e,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:39.155741 containerd[1536]: time="2025-10-30T23:58:39.155695644Z" level=error msg="Failed to destroy network for sandbox \"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.157835 systemd[1]: run-netns-cni\x2d806273fb\x2d4906\x2d9613\x2dc323\x2d6d7da962de3a.mount: Deactivated successfully. Oct 30 23:58:39.201624 containerd[1536]: time="2025-10-30T23:58:39.201563135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kc6vc,Uid:7aeb131b-e092-42f8-a46c-9c20bc9f295e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.201877 kubelet[2674]: E1030 23:58:39.201802 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.201877 kubelet[2674]: E1030 23:58:39.201870 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:39.203233 kubelet[2674]: E1030 23:58:39.201890 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kc6vc" Oct 30 23:58:39.203233 kubelet[2674]: E1030 23:58:39.202023 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96d1d54193449eebf3dd2c3a188ac5f6adcfcf739ca1495cb173122f36ebd415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:39.488419 containerd[1536]: time="2025-10-30T23:58:39.487985443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8z8d7,Uid:96333c0e-7c74-4fa2-93be-d0ba59addf64,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:39.495184 containerd[1536]: time="2025-10-30T23:58:39.495128999Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-777569f766-kspc6,Uid:e9077330-c4c7-4bab-b6ff-e93888d2d752,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:39.501127 containerd[1536]: time="2025-10-30T23:58:39.501089623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8575b,Uid:af40a0eb-fde7-43c7-a00e-4d438056d56c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:39.563543 containerd[1536]: time="2025-10-30T23:58:39.563451371Z" level=error msg="Failed to destroy network for sandbox \"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.566529 containerd[1536]: time="2025-10-30T23:58:39.566468283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8z8d7,Uid:96333c0e-7c74-4fa2-93be-d0ba59addf64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.567351 kubelet[2674]: E1030 23:58:39.566880 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.567351 kubelet[2674]: E1030 23:58:39.566936 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" Oct 30 23:58:39.567351 kubelet[2674]: E1030 23:58:39.566955 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" Oct 30 23:58:39.567583 kubelet[2674]: E1030 23:58:39.566993 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bf97986c9-8z8d7_calico-apiserver(96333c0e-7c74-4fa2-93be-d0ba59addf64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bf97986c9-8z8d7_calico-apiserver(96333c0e-7c74-4fa2-93be-d0ba59addf64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23cc387b71941e9a6ebd9c82f65f1d79bc52f199057748aa3e42830ee9c17497\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64" Oct 30 23:58:39.572290 containerd[1536]: time="2025-10-30T23:58:39.572213065Z" level=error msg="Failed to destroy network for sandbox \"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.572389 containerd[1536]: time="2025-10-30T23:58:39.572227265Z" level=error msg="Failed to destroy network for sandbox \"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.573222 containerd[1536]: time="2025-10-30T23:58:39.573183835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777569f766-kspc6,Uid:e9077330-c4c7-4bab-b6ff-e93888d2d752,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.573613 kubelet[2674]: E1030 23:58:39.573469 2674 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.574104 kubelet[2674]: E1030 23:58:39.573995 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" Oct 
30 23:58:39.574547 kubelet[2674]: E1030 23:58:39.574306 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" Oct 30 23:58:39.575564 kubelet[2674]: E1030 23:58:39.574363 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-777569f766-kspc6_calico-apiserver(e9077330-c4c7-4bab-b6ff-e93888d2d752)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-777569f766-kspc6_calico-apiserver(e9077330-c4c7-4bab-b6ff-e93888d2d752)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead2933baef9f030f0683ed27e3ccd3e08b1f08af086acc492d9ee0a50fb7878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752" Oct 30 23:58:39.576326 containerd[1536]: time="2025-10-30T23:58:39.575920065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8575b,Uid:af40a0eb-fde7-43c7-a00e-4d438056d56c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.576661 kubelet[2674]: E1030 23:58:39.576489 2674 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 23:58:39.576661 kubelet[2674]: E1030 23:58:39.576524 2674 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" Oct 30 23:58:39.576661 kubelet[2674]: E1030 23:58:39.576537 2674 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" Oct 30 23:58:39.576885 kubelet[2674]: E1030 23:58:39.576569 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bf97986c9-8575b_calico-apiserver(af40a0eb-fde7-43c7-a00e-4d438056d56c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bf97986c9-8575b_calico-apiserver(af40a0eb-fde7-43c7-a00e-4d438056d56c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dfccdc2fbfa7b7f0470ef74f71b83724c33dd55ba50b48137c2fbddc925895f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c" Oct 30 23:58:41.039430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463123262.mount: Deactivated successfully. Oct 30 23:58:41.314515 containerd[1536]: time="2025-10-30T23:58:41.314212724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:41.314869 containerd[1536]: time="2025-10-30T23:58:41.314837370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 30 23:58:41.315587 containerd[1536]: time="2025-10-30T23:58:41.315556857Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:41.328617 containerd[1536]: time="2025-10-30T23:58:41.328545267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 23:58:41.329335 containerd[1536]: time="2025-10-30T23:58:41.328955351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.171190268s" Oct 30 23:58:41.329335 containerd[1536]: time="2025-10-30T23:58:41.328983992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" 
Oct 30 23:58:41.341497 containerd[1536]: time="2025-10-30T23:58:41.341462757Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 23:58:41.357475 containerd[1536]: time="2025-10-30T23:58:41.356551308Z" level=info msg="Container 04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:41.358429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640412643.mount: Deactivated successfully. Oct 30 23:58:41.370246 containerd[1536]: time="2025-10-30T23:58:41.370192325Z" level=info msg="CreateContainer within sandbox \"12db372a55971a7ba297af55ca808a23d2a0f76a00f699263e87eb3c9351d3d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\"" Oct 30 23:58:41.372780 containerd[1536]: time="2025-10-30T23:58:41.372752071Z" level=info msg="StartContainer for \"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\"" Oct 30 23:58:41.374500 containerd[1536]: time="2025-10-30T23:58:41.374467048Z" level=info msg="connecting to shim 04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c" address="unix:///run/containerd/s/8894074ac5bb9cb798adf84f0739aad9e331bef9fbdeb467c33d3d1b81012e75" protocol=ttrpc version=3 Oct 30 23:58:41.401562 systemd[1]: Started cri-containerd-04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c.scope - libcontainer container 04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c. Oct 30 23:58:41.440156 containerd[1536]: time="2025-10-30T23:58:41.440116787Z" level=info msg="StartContainer for \"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\" returns successfully" Oct 30 23:58:41.554357 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Oct 30 23:58:41.554467 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 23:58:41.694055 kubelet[2674]: I1030 23:58:41.693924 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4279\" (UniqueName: \"kubernetes.io/projected/3085853b-0547-4b3b-8388-274e148caac5-kube-api-access-h4279\") pod \"3085853b-0547-4b3b-8388-274e148caac5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " Oct 30 23:58:41.694055 kubelet[2674]: I1030 23:58:41.693987 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3085853b-0547-4b3b-8388-274e148caac5-whisker-ca-bundle\") pod \"3085853b-0547-4b3b-8388-274e148caac5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " Oct 30 23:58:41.694055 kubelet[2674]: I1030 23:58:41.694016 2674 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3085853b-0547-4b3b-8388-274e148caac5-whisker-backend-key-pair\") pod \"3085853b-0547-4b3b-8388-274e148caac5\" (UID: \"3085853b-0547-4b3b-8388-274e148caac5\") " Oct 30 23:58:41.698018 kubelet[2674]: I1030 23:58:41.697871 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3085853b-0547-4b3b-8388-274e148caac5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3085853b-0547-4b3b-8388-274e148caac5" (UID: "3085853b-0547-4b3b-8388-274e148caac5"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 23:58:41.705277 kubelet[2674]: I1030 23:58:41.705223 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3085853b-0547-4b3b-8388-274e148caac5-kube-api-access-h4279" (OuterVolumeSpecName: "kube-api-access-h4279") pod "3085853b-0547-4b3b-8388-274e148caac5" (UID: "3085853b-0547-4b3b-8388-274e148caac5"). InnerVolumeSpecName "kube-api-access-h4279". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 23:58:41.705374 kubelet[2674]: I1030 23:58:41.705351 2674 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3085853b-0547-4b3b-8388-274e148caac5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3085853b-0547-4b3b-8388-274e148caac5" (UID: "3085853b-0547-4b3b-8388-274e148caac5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 23:58:41.794586 kubelet[2674]: I1030 23:58:41.794538 2674 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4279\" (UniqueName: \"kubernetes.io/projected/3085853b-0547-4b3b-8388-274e148caac5-kube-api-access-h4279\") on node \"localhost\" DevicePath \"\"" Oct 30 23:58:41.794586 kubelet[2674]: I1030 23:58:41.794575 2674 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3085853b-0547-4b3b-8388-274e148caac5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 23:58:41.794586 kubelet[2674]: I1030 23:58:41.794586 2674 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3085853b-0547-4b3b-8388-274e148caac5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 23:58:42.040368 systemd[1]: 
var-lib-kubelet-pods-3085853b\x2d0547\x2d4b3b\x2d8388\x2d274e148caac5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh4279.mount: Deactivated successfully. Oct 30 23:58:42.040488 systemd[1]: var-lib-kubelet-pods-3085853b\x2d0547\x2d4b3b\x2d8388\x2d274e148caac5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 23:58:42.182936 systemd[1]: Removed slice kubepods-besteffort-pod3085853b_0547_4b3b_8388_274e148caac5.slice - libcontainer container kubepods-besteffort-pod3085853b_0547_4b3b_8388_274e148caac5.slice. Oct 30 23:58:42.194435 kubelet[2674]: I1030 23:58:42.194337 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l9pp2" podStartSLOduration=1.9208000840000001 podStartE2EDuration="13.194320578s" podCreationTimestamp="2025-10-30 23:58:29 +0000 UTC" firstStartedPulling="2025-10-30 23:58:30.056031303 +0000 UTC m=+23.092580039" lastFinishedPulling="2025-10-30 23:58:41.329551797 +0000 UTC m=+34.366100533" observedRunningTime="2025-10-30 23:58:42.193855573 +0000 UTC m=+35.230404309" watchObservedRunningTime="2025-10-30 23:58:42.194320578 +0000 UTC m=+35.230869314" Oct 30 23:58:42.262814 systemd[1]: Created slice kubepods-besteffort-pod31db6031_8469_4cd5_9b2b_86eba32dc2d4.slice - libcontainer container kubepods-besteffort-pod31db6031_8469_4cd5_9b2b_86eba32dc2d4.slice. 
Oct 30 23:58:42.297204 kubelet[2674]: I1030 23:58:42.297082 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31db6031-8469-4cd5-9b2b-86eba32dc2d4-whisker-backend-key-pair\") pod \"whisker-64784994b5-czbfj\" (UID: \"31db6031-8469-4cd5-9b2b-86eba32dc2d4\") " pod="calico-system/whisker-64784994b5-czbfj" Oct 30 23:58:42.297204 kubelet[2674]: I1030 23:58:42.297133 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31db6031-8469-4cd5-9b2b-86eba32dc2d4-whisker-ca-bundle\") pod \"whisker-64784994b5-czbfj\" (UID: \"31db6031-8469-4cd5-9b2b-86eba32dc2d4\") " pod="calico-system/whisker-64784994b5-czbfj" Oct 30 23:58:42.297204 kubelet[2674]: I1030 23:58:42.297150 2674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7pz\" (UniqueName: \"kubernetes.io/projected/31db6031-8469-4cd5-9b2b-86eba32dc2d4-kube-api-access-4q7pz\") pod \"whisker-64784994b5-czbfj\" (UID: \"31db6031-8469-4cd5-9b2b-86eba32dc2d4\") " pod="calico-system/whisker-64784994b5-czbfj" Oct 30 23:58:42.318077 containerd[1536]: time="2025-10-30T23:58:42.318018942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\" id:\"63a00ac5160224a58823afa3ecf62cc05d5ecf8501b668e5d8293e7b73b77b78\" pid:3910 exit_status:1 exited_at:{seconds:1761868722 nanos:317685978}" Oct 30 23:58:42.568356 containerd[1536]: time="2025-10-30T23:58:42.568052175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64784994b5-czbfj,Uid:31db6031-8469-4cd5-9b2b-86eba32dc2d4,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:42.737003 systemd-networkd[1434]: caliab80f23cee2: Link UP Oct 30 23:58:42.737532 systemd-networkd[1434]: caliab80f23cee2: Gained carrier Oct 30 
23:58:42.751230 containerd[1536]: 2025-10-30 23:58:42.589 [INFO][3923] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 23:58:42.751230 containerd[1536]: 2025-10-30 23:58:42.619 [INFO][3923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64784994b5--czbfj-eth0 whisker-64784994b5- calico-system 31db6031-8469-4cd5-9b2b-86eba32dc2d4 892 0 2025-10-30 23:58:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64784994b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64784994b5-czbfj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliab80f23cee2 [] [] }} ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-" Oct 30 23:58:42.751230 containerd[1536]: 2025-10-30 23:58:42.619 [INFO][3923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751230 containerd[1536]: 2025-10-30 23:58:42.691 [INFO][3938] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" HandleID="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Workload="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.692 [INFO][3938] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" 
HandleID="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Workload="localhost-k8s-whisker--64784994b5--czbfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b2150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64784994b5-czbfj", "timestamp":"2025-10-30 23:58:42.691796099 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.692 [INFO][3938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.692 [INFO][3938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.692 [INFO][3938] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.704 [INFO][3938] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" host="localhost" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.709 [INFO][3938] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.713 [INFO][3938] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.715 [INFO][3938] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.717 [INFO][3938] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:42.751511 containerd[1536]: 2025-10-30 23:58:42.717 
[INFO][3938] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" host="localhost" Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.718 [INFO][3938] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.722 [INFO][3938] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" host="localhost" Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.727 [INFO][3938] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" host="localhost" Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.727 [INFO][3938] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" host="localhost" Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.727 [INFO][3938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 23:58:42.751706 containerd[1536]: 2025-10-30 23:58:42.727 [INFO][3938] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" HandleID="k8s-pod-network.7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Workload="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751828 containerd[1536]: 2025-10-30 23:58:42.730 [INFO][3923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64784994b5--czbfj-eth0", GenerateName:"whisker-64784994b5-", Namespace:"calico-system", SelfLink:"", UID:"31db6031-8469-4cd5-9b2b-86eba32dc2d4", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64784994b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64784994b5-czbfj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab80f23cee2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:42.751828 containerd[1536]: 2025-10-30 23:58:42.730 [INFO][3923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751896 containerd[1536]: 2025-10-30 23:58:42.730 [INFO][3923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab80f23cee2 ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751896 containerd[1536]: 2025-10-30 23:58:42.738 [INFO][3923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.751935 containerd[1536]: 2025-10-30 23:58:42.738 [INFO][3923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64784994b5--czbfj-eth0", GenerateName:"whisker-64784994b5-", Namespace:"calico-system", SelfLink:"", UID:"31db6031-8469-4cd5-9b2b-86eba32dc2d4", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 42, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64784994b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f", Pod:"whisker-64784994b5-czbfj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab80f23cee2", MAC:"46:80:ff:98:a2:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:42.751980 containerd[1536]: 2025-10-30 23:58:42.748 [INFO][3923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" Namespace="calico-system" Pod="whisker-64784994b5-czbfj" WorkloadEndpoint="localhost-k8s-whisker--64784994b5--czbfj-eth0" Oct 30 23:58:42.853294 containerd[1536]: time="2025-10-30T23:58:42.852614624Z" level=info msg="connecting to shim 7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f" address="unix:///run/containerd/s/4643edc2fe6dcd3d5cc4e93cc917976896c507ef587facca9aa4db3c601e019f" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:42.897320 systemd[1]: Started cri-containerd-7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f.scope - libcontainer container 7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f. 
Oct 30 23:58:42.914641 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:42.958696 containerd[1536]: time="2025-10-30T23:58:42.958655975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64784994b5-czbfj,Uid:31db6031-8469-4cd5-9b2b-86eba32dc2d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ce3c165187c9b94f63e791bfaeebe3f44712e6e2e297440d9daa5babb0f0e7f\"" Oct 30 23:58:42.963530 containerd[1536]: time="2025-10-30T23:58:42.963489263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 23:58:43.057752 kubelet[2674]: I1030 23:58:43.057650 2674 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3085853b-0547-4b3b-8388-274e148caac5" path="/var/lib/kubelet/pods/3085853b-0547-4b3b-8388-274e148caac5/volumes" Oct 30 23:58:43.178649 containerd[1536]: time="2025-10-30T23:58:43.178527584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:43.181131 containerd[1536]: time="2025-10-30T23:58:43.181076688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 23:58:43.181131 containerd[1536]: time="2025-10-30T23:58:43.181115968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 23:58:43.181336 kubelet[2674]: E1030 23:58:43.181266 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:58:43.181336 kubelet[2674]: E1030 23:58:43.181318 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:58:43.184036 kubelet[2674]: E1030 23:58:43.183985 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:383267d036d54bedba55c947e586c654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePol
icy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:43.185992 containerd[1536]: time="2025-10-30T23:58:43.185960694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 23:58:43.269421 containerd[1536]: time="2025-10-30T23:58:43.268789636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\" id:\"48cde90ff04bdd9b02b84350160e12be473ae3262c95e4f6704403fc33dc32ee\" pid:4136 exit_status:1 exited_at:{seconds:1761868723 nanos:268511514}" Oct 30 23:58:43.286702 systemd-networkd[1434]: vxlan.calico: Link UP Oct 30 23:58:43.286708 systemd-networkd[1434]: vxlan.calico: Gained carrier Oct 30 23:58:43.399442 containerd[1536]: time="2025-10-30T23:58:43.399373710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:43.400636 containerd[1536]: time="2025-10-30T23:58:43.400490440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 23:58:43.400636 containerd[1536]: time="2025-10-30T23:58:43.400582041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 23:58:43.401401 kubelet[2674]: E1030 23:58:43.400723 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:58:43.401401 kubelet[2674]: E1030 23:58:43.400794 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:58:43.401519 kubelet[2674]: E1030 23:58:43.400909 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Termination
MessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:43.402099 kubelet[2674]: E1030 23:58:43.402064 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64784994b5-czbfj" podUID="31db6031-8469-4cd5-9b2b-86eba32dc2d4" Oct 30 23:58:44.185913 kubelet[2674]: E1030 23:58:44.185723 2674 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64784994b5-czbfj" podUID="31db6031-8469-4cd5-9b2b-86eba32dc2d4" Oct 30 23:58:44.261169 containerd[1536]: time="2025-10-30T23:58:44.261072018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\" id:\"b6c57aca3c130381cc4cea1c54e12d09c2a63762defe59d9406174560b19d203\" pid:4237 exit_status:1 exited_at:{seconds:1761868724 nanos:260388411}" Oct 30 23:58:44.723551 systemd-networkd[1434]: caliab80f23cee2: Gained IPv6LL Oct 30 23:58:45.235541 systemd-networkd[1434]: vxlan.calico: Gained IPv6LL Oct 30 23:58:47.137924 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:33066.service - OpenSSH per-connection server daemon (10.0.0.1:33066). Oct 30 23:58:47.211152 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 33066 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:58:47.212530 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:58:47.216426 systemd-logind[1507]: New session 8 of user core. 
Oct 30 23:58:47.223530 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 23:58:47.359343 sshd[4260]: Connection closed by 10.0.0.1 port 33066 Oct 30 23:58:47.359911 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:47.363340 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:33066.service: Deactivated successfully. Oct 30 23:58:47.365015 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 23:58:47.365833 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Oct 30 23:58:47.366873 systemd-logind[1507]: Removed session 8. Oct 30 23:58:49.053988 containerd[1536]: time="2025-10-30T23:58:49.053921129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nqdgd,Uid:b6340a74-f3fe-42f0-a5b5-631bdd75d561,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:49.156126 systemd-networkd[1434]: cali2b77ea09861: Link UP Oct 30 23:58:49.156856 systemd-networkd[1434]: cali2b77ea09861: Gained carrier Oct 30 23:58:49.176637 containerd[1536]: 2025-10-30 23:58:49.090 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nqdgd-eth0 goldmane-666569f655- calico-system b6340a74-f3fe-42f0-a5b5-631bdd75d561 831 0 2025-10-30 23:58:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nqdgd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2b77ea09861 [] [] }} ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-" Oct 30 23:58:49.176637 containerd[1536]: 2025-10-30 23:58:49.090 [INFO][4286] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.176637 containerd[1536]: 2025-10-30 23:58:49.112 [INFO][4301] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" HandleID="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Workload="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.112 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" HandleID="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Workload="localhost-k8s-goldmane--666569f655--nqdgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c31c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nqdgd", "timestamp":"2025-10-30 23:58:49.112837604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.113 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.113 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.113 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.124 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" host="localhost" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.128 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.132 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.134 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.137 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:49.176864 containerd[1536]: 2025-10-30 23:58:49.137 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" host="localhost" Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.139 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2 Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.146 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" host="localhost" Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.151 [INFO][4301] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" host="localhost" Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.151 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" host="localhost" Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.151 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:49.177123 containerd[1536]: 2025-10-30 23:58:49.151 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" HandleID="k8s-pod-network.9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Workload="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.177231 containerd[1536]: 2025-10-30 23:58:49.154 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nqdgd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6340a74-f3fe-42f0-a5b5-631bdd75d561", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nqdgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b77ea09861", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:49.177231 containerd[1536]: 2025-10-30 23:58:49.154 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.177300 containerd[1536]: 2025-10-30 23:58:49.154 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b77ea09861 ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.177300 containerd[1536]: 2025-10-30 23:58:49.157 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.177338 containerd[1536]: 2025-10-30 23:58:49.157 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nqdgd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b6340a74-f3fe-42f0-a5b5-631bdd75d561", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2", Pod:"goldmane-666569f655-nqdgd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2b77ea09861", MAC:"06:b3:c2:0d:c7:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:49.177403 containerd[1536]: 2025-10-30 23:58:49.172 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" Namespace="calico-system" Pod="goldmane-666569f655-nqdgd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nqdgd-eth0" Oct 30 23:58:49.222658 containerd[1536]: time="2025-10-30T23:58:49.222614409Z" level=info msg="connecting to shim 
9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2" address="unix:///run/containerd/s/4e504131716bd86a544f10e805db1a70ecc5d7800f82ad9659a88c65ad7c74a8" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:49.249609 systemd[1]: Started cri-containerd-9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2.scope - libcontainer container 9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2. Oct 30 23:58:49.263615 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:49.289186 containerd[1536]: time="2025-10-30T23:58:49.289145306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nqdgd,Uid:b6340a74-f3fe-42f0-a5b5-631bdd75d561,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e13bb28f38b8bfbc70335e5f0f413b10510add7c533763d12ed6c712c3a88b2\"" Oct 30 23:58:49.292408 containerd[1536]: time="2025-10-30T23:58:49.291905848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 23:58:49.519894 containerd[1536]: time="2025-10-30T23:58:49.519763245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:49.521583 containerd[1536]: time="2025-10-30T23:58:49.521376898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 23:58:49.521659 containerd[1536]: time="2025-10-30T23:58:49.521471299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 23:58:49.521827 kubelet[2674]: E1030 23:58:49.521789 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 23:58:49.522433 kubelet[2674]: E1030 23:58:49.521841 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 23:58:49.522433 kubelet[2674]: E1030 23:58:49.521991 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},VolumeMount{Name:kube-api-access-2hj8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nqdgd_calico-system(b6340a74-f3fe-42f0-a5b5-631bdd75d561): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:49.523171 kubelet[2674]: E1030 23:58:49.523137 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561" Oct 30 23:58:50.053722 containerd[1536]: time="2025-10-30T23:58:50.053678021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797789cc7-9n2z5,Uid:09999d1d-585b-4b92-bfcd-e8302c7a6bdf,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:50.164637 systemd-networkd[1434]: calic1952904cdf: Link UP Oct 30 23:58:50.164916 systemd-networkd[1434]: calic1952904cdf: Gained carrier Oct 30 23:58:50.187403 containerd[1536]: 2025-10-30 23:58:50.095 [INFO][4367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0 calico-kube-controllers-7797789cc7- calico-system 09999d1d-585b-4b92-bfcd-e8302c7a6bdf 819 0 2025-10-30 23:58:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7797789cc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7797789cc7-9n2z5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1952904cdf [] [] }} ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-" Oct 30 23:58:50.187403 containerd[1536]: 2025-10-30 23:58:50.095 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" 
Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.187403 containerd[1536]: 2025-10-30 23:58:50.119 [INFO][4381] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" HandleID="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Workload="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.119 [INFO][4381] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" HandleID="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Workload="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7797789cc7-9n2z5", "timestamp":"2025-10-30 23:58:50.119088937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.119 [INFO][4381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.119 [INFO][4381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.119 [INFO][4381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.130 [INFO][4381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" host="localhost" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.136 [INFO][4381] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.142 [INFO][4381] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.144 [INFO][4381] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.147 [INFO][4381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:50.187805 containerd[1536]: 2025-10-30 23:58:50.147 [INFO][4381] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" host="localhost" Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.149 [INFO][4381] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75 Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.153 [INFO][4381] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" host="localhost" Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.159 [INFO][4381] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" host="localhost" Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.159 [INFO][4381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" host="localhost" Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.159 [INFO][4381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:50.187997 containerd[1536]: 2025-10-30 23:58:50.159 [INFO][4381] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" HandleID="k8s-pod-network.de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Workload="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.188104 containerd[1536]: 2025-10-30 23:58:50.162 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0", GenerateName:"calico-kube-controllers-7797789cc7-", Namespace:"calico-system", SelfLink:"", UID:"09999d1d-585b-4b92-bfcd-e8302c7a6bdf", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7797789cc7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7797789cc7-9n2z5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1952904cdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:50.188150 containerd[1536]: 2025-10-30 23:58:50.162 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.188150 containerd[1536]: 2025-10-30 23:58:50.162 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1952904cdf ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.188150 containerd[1536]: 2025-10-30 23:58:50.165 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.188206 containerd[1536]: 
2025-10-30 23:58:50.165 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0", GenerateName:"calico-kube-controllers-7797789cc7-", Namespace:"calico-system", SelfLink:"", UID:"09999d1d-585b-4b92-bfcd-e8302c7a6bdf", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7797789cc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75", Pod:"calico-kube-controllers-7797789cc7-9n2z5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1952904cdf", MAC:"be:e7:44:17:5f:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:50.188254 containerd[1536]: 
2025-10-30 23:58:50.179 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" Namespace="calico-system" Pod="calico-kube-controllers-7797789cc7-9n2z5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7797789cc7--9n2z5-eth0" Oct 30 23:58:50.199566 kubelet[2674]: E1030 23:58:50.199521 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561" Oct 30 23:58:50.236179 containerd[1536]: time="2025-10-30T23:58:50.236076859Z" level=info msg="connecting to shim de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75" address="unix:///run/containerd/s/b02c904be8bda4c018e559e1cd6e6eb5fc2967f2ac724bed8fd0ad560bc1b419" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:50.265589 systemd[1]: Started cri-containerd-de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75.scope - libcontainer container de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75. 
Oct 30 23:58:50.278039 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:50.311067 containerd[1536]: time="2025-10-30T23:58:50.310956809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7797789cc7-9n2z5,Uid:09999d1d-585b-4b92-bfcd-e8302c7a6bdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"de69ebb78a0c24218d218f0b4a4f34acb48ea394fac10c83ba51614d69054b75\"" Oct 30 23:58:50.313680 containerd[1536]: time="2025-10-30T23:58:50.313649750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 23:58:50.521757 containerd[1536]: time="2025-10-30T23:58:50.521707150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:50.522618 containerd[1536]: time="2025-10-30T23:58:50.522585037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 23:58:50.522688 containerd[1536]: time="2025-10-30T23:58:50.522614037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 23:58:50.522869 kubelet[2674]: E1030 23:58:50.522814 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 23:58:50.523131 kubelet[2674]: E1030 23:58:50.522866 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 23:58:50.529563 kubelet[2674]: E1030 23:58:50.522988 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkz6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797789cc7-9n2z5_calico-system(09999d1d-585b-4b92-bfcd-e8302c7a6bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:50.530735 kubelet[2674]: E1030 23:58:50.530688 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf" Oct 30 23:58:51.054138 containerd[1536]: time="2025-10-30T23:58:51.054088416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqbjx,Uid:c1660b4c-0b9c-4abb-b46f-835991a79c24,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:51.055857 containerd[1536]: time="2025-10-30T23:58:51.055803870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8z8d7,Uid:96333c0e-7c74-4fa2-93be-d0ba59addf64,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:51.187589 systemd-networkd[1434]: cali2b77ea09861: Gained IPv6LL Oct 30 23:58:51.202645 kubelet[2674]: E1030 23:58:51.202596 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf" Oct 30 23:58:51.202860 kubelet[2674]: E1030 23:58:51.202645 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561" Oct 30 23:58:51.212581 
systemd-networkd[1434]: cali291f0f8cb6e: Link UP Oct 30 23:58:51.214515 systemd-networkd[1434]: cali291f0f8cb6e: Gained carrier Oct 30 23:58:51.228611 containerd[1536]: 2025-10-30 23:58:51.120 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0 calico-apiserver-7bf97986c9- calico-apiserver 96333c0e-7c74-4fa2-93be-d0ba59addf64 827 0 2025-10-30 23:58:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bf97986c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bf97986c9-8z8d7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali291f0f8cb6e [] [] }} ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-" Oct 30 23:58:51.228611 containerd[1536]: 2025-10-30 23:58:51.121 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.228611 containerd[1536]: 2025-10-30 23:58:51.151 [INFO][4474] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" HandleID="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.151 [INFO][4474] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" HandleID="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bf97986c9-8z8d7", "timestamp":"2025-10-30 23:58:51.15180341 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.151 [INFO][4474] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.152 [INFO][4474] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.152 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.163 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" host="localhost" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.168 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.173 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.175 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.177 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:51.229157 containerd[1536]: 2025-10-30 23:58:51.177 [INFO][4474] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" host="localhost" Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.178 [INFO][4474] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5 Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.195 [INFO][4474] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" host="localhost" Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.203 [INFO][4474] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" host="localhost" Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.203 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" host="localhost" Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.203 [INFO][4474] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 23:58:51.229442 containerd[1536]: 2025-10-30 23:58:51.204 [INFO][4474] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" HandleID="k8s-pod-network.a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.229839 containerd[1536]: 2025-10-30 23:58:51.207 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0", GenerateName:"calico-apiserver-7bf97986c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"96333c0e-7c74-4fa2-93be-d0ba59addf64", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bf97986c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bf97986c9-8z8d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali291f0f8cb6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:51.229994 containerd[1536]: 2025-10-30 23:58:51.207 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.229994 containerd[1536]: 2025-10-30 23:58:51.207 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali291f0f8cb6e ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.229994 containerd[1536]: 2025-10-30 23:58:51.215 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.230091 containerd[1536]: 2025-10-30 23:58:51.216 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0", 
GenerateName:"calico-apiserver-7bf97986c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"96333c0e-7c74-4fa2-93be-d0ba59addf64", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bf97986c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5", Pod:"calico-apiserver-7bf97986c9-8z8d7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali291f0f8cb6e", MAC:"b2:2c:24:db:b1:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:51.230164 containerd[1536]: 2025-10-30 23:58:51.225 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8z8d7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8z8d7-eth0" Oct 30 23:58:51.256042 containerd[1536]: time="2025-10-30T23:58:51.255563930Z" level=info msg="connecting to shim a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5" 
address="unix:///run/containerd/s/f84c15f409705164d7c24f1d3d59ced586017e5de66cf0caaf51dfe3b2558fe0" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:51.289589 systemd[1]: Started cri-containerd-a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5.scope - libcontainer container a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5. Oct 30 23:58:51.302711 systemd-networkd[1434]: cali4e01decb85c: Link UP Oct 30 23:58:51.303340 systemd-networkd[1434]: cali4e01decb85c: Gained carrier Oct 30 23:58:51.318483 containerd[1536]: 2025-10-30 23:58:51.121 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0 coredns-668d6bf9bc- kube-system c1660b4c-0b9c-4abb-b46f-835991a79c24 825 0 2025-10-30 23:58:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qqbjx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e01decb85c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-" Oct 30 23:58:51.318483 containerd[1536]: 2025-10-30 23:58:51.121 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.318483 containerd[1536]: 2025-10-30 23:58:51.159 [INFO][4476] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" 
HandleID="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Workload="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.159 [INFO][4476] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" HandleID="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Workload="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qqbjx", "timestamp":"2025-10-30 23:58:51.159406709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.159 [INFO][4476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.204 [INFO][4476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.204 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.265 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" host="localhost" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.271 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.276 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.278 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.281 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:51.318659 containerd[1536]: 2025-10-30 23:58:51.281 [INFO][4476] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" host="localhost" Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.283 [INFO][4476] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543 Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.287 [INFO][4476] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" host="localhost" Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.293 [INFO][4476] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" host="localhost" Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.293 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" host="localhost" Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.293 [INFO][4476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:51.318983 containerd[1536]: 2025-10-30 23:58:51.293 [INFO][4476] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" HandleID="k8s-pod-network.b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Workload="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.319089 containerd[1536]: 2025-10-30 23:58:51.298 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1660b4c-0b9c-4abb-b46f-835991a79c24", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qqbjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e01decb85c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:51.319142 containerd[1536]: 2025-10-30 23:58:51.298 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.319142 containerd[1536]: 2025-10-30 23:58:51.298 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e01decb85c ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.319142 containerd[1536]: 2025-10-30 23:58:51.303 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.319203 containerd[1536]: 2025-10-30 23:58:51.304 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c1660b4c-0b9c-4abb-b46f-835991a79c24", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543", Pod:"coredns-668d6bf9bc-qqbjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e01decb85c", MAC:"82:a8:27:5c:5d:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:51.319203 containerd[1536]: 2025-10-30 23:58:51.314 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" Namespace="kube-system" Pod="coredns-668d6bf9bc-qqbjx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qqbjx-eth0" Oct 30 23:58:51.320899 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:51.338035 containerd[1536]: time="2025-10-30T23:58:51.337997725Z" level=info msg="connecting to shim b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543" address="unix:///run/containerd/s/5cdedf3b8d8db054636d747a8a94c63bf0ecbba91c326293897233ff064377cb" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:51.361103 containerd[1536]: time="2025-10-30T23:58:51.361062503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8z8d7,Uid:96333c0e-7c74-4fa2-93be-d0ba59addf64,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2ad96826c9b987d1fa4f47c8099bf55fa35844af74b922e6c7088b17c968de5\"" Oct 30 23:58:51.362746 containerd[1536]: time="2025-10-30T23:58:51.362670316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:58:51.379556 systemd[1]: Started cri-containerd-b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543.scope - libcontainer container b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543. 
Oct 30 23:58:51.404233 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:51.425671 containerd[1536]: time="2025-10-30T23:58:51.425631801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qqbjx,Uid:c1660b4c-0b9c-4abb-b46f-835991a79c24,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543\"" Oct 30 23:58:51.429993 containerd[1536]: time="2025-10-30T23:58:51.429955835Z" level=info msg="CreateContainer within sandbox \"b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:58:51.439734 containerd[1536]: time="2025-10-30T23:58:51.439686390Z" level=info msg="Container 77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:51.461004 containerd[1536]: time="2025-10-30T23:58:51.460932553Z" level=info msg="CreateContainer within sandbox \"b3591be0c833a7260797520936972bbc8249c232129a1c08426f6f64c1bb1543\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374\"" Oct 30 23:58:51.461735 containerd[1536]: time="2025-10-30T23:58:51.461491558Z" level=info msg="StartContainer for \"77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374\"" Oct 30 23:58:51.463614 containerd[1536]: time="2025-10-30T23:58:51.463590694Z" level=info msg="connecting to shim 77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374" address="unix:///run/containerd/s/5cdedf3b8d8db054636d747a8a94c63bf0ecbba91c326293897233ff064377cb" protocol=ttrpc version=3 Oct 30 23:58:51.492595 systemd[1]: Started cri-containerd-77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374.scope - libcontainer container 77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374. 
Oct 30 23:58:51.519360 containerd[1536]: time="2025-10-30T23:58:51.519317604Z" level=info msg="StartContainer for \"77d13829d703f5100ec6ed669bb125ec7e2e2ec9e96246a5f3d17f76a8e8b374\" returns successfully" Oct 30 23:58:51.563731 containerd[1536]: time="2025-10-30T23:58:51.563688666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:51.569254 containerd[1536]: time="2025-10-30T23:58:51.569153508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:58:51.569497 containerd[1536]: time="2025-10-30T23:58:51.569205028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:58:51.569719 kubelet[2674]: E1030 23:58:51.569679 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:51.574001 kubelet[2674]: E1030 23:58:51.573955 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:51.574178 kubelet[2674]: E1030 23:58:51.574135 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qdbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bf97986c9-8z8d7_calico-apiserver(96333c0e-7c74-4fa2-93be-d0ba59addf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:51.575782 kubelet[2674]: E1030 23:58:51.575438 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64" Oct 30 23:58:51.635542 systemd-networkd[1434]: calic1952904cdf: Gained IPv6LL Oct 30 23:58:52.061676 containerd[1536]: time="2025-10-30T23:58:52.061636735Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9pq2k,Uid:b6ce2bdb-00f7-424e-b2d2-474513bfd59b,Namespace:kube-system,Attempt:0,}" Oct 30 23:58:52.157490 systemd-networkd[1434]: cali692cbc9c03f: Link UP Oct 30 23:58:52.157700 systemd-networkd[1434]: cali692cbc9c03f: Gained carrier Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.093 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0 coredns-668d6bf9bc- kube-system b6ce2bdb-00f7-424e-b2d2-474513bfd59b 826 0 2025-10-30 23:58:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-9pq2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali692cbc9c03f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.093 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.117 [INFO][4651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" HandleID="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Workload="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.117 [INFO][4651] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" HandleID="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Workload="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-9pq2k", "timestamp":"2025-10-30 23:58:52.117674998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.117 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.117 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.117 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.128 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.132 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.136 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.138 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.140 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.140 [INFO][4651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.141 [INFO][4651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.144 [INFO][4651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.150 [INFO][4651] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.150 [INFO][4651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" host="localhost" Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.150 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 23:58:52.172938 containerd[1536]: 2025-10-30 23:58:52.151 [INFO][4651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" HandleID="k8s-pod-network.c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Workload="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.153 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6ce2bdb-00f7-424e-b2d2-474513bfd59b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-9pq2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692cbc9c03f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.153 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.153 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali692cbc9c03f ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.156 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.157 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b6ce2bdb-00f7-424e-b2d2-474513bfd59b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b", Pod:"coredns-668d6bf9bc-9pq2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali692cbc9c03f", MAC:"2a:6c:0f:b2:2c:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:52.174436 containerd[1536]: 2025-10-30 23:58:52.169 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" Namespace="kube-system" Pod="coredns-668d6bf9bc-9pq2k" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--9pq2k-eth0" Oct 30 23:58:52.192691 containerd[1536]: time="2025-10-30T23:58:52.192559644Z" level=info msg="connecting to shim c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b" address="unix:///run/containerd/s/affa8faf9aa803094fcb2b38366e541ceca45592b11b05ea388760caf6bd3e44" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:52.216561 systemd[1]: Started cri-containerd-c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b.scope - libcontainer container c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b. Oct 30 23:58:52.223642 kubelet[2674]: E1030 23:58:52.223505 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64" Oct 30 23:58:52.224605 kubelet[2674]: E1030 23:58:52.224550 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf" Oct 30 23:58:52.239520 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:52.246468 kubelet[2674]: I1030 23:58:52.246401 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qqbjx" podStartSLOduration=40.24636993 podStartE2EDuration="40.24636993s" podCreationTimestamp="2025-10-30 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:52.228174352 +0000 UTC m=+45.264723128" watchObservedRunningTime="2025-10-30 23:58:52.24636993 +0000 UTC m=+45.282918626" Oct 30 23:58:52.254756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176488238.mount: Deactivated successfully. Oct 30 23:58:52.279661 containerd[1536]: time="2025-10-30T23:58:52.279548460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pq2k,Uid:b6ce2bdb-00f7-424e-b2d2-474513bfd59b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b\"" Oct 30 23:58:52.293315 containerd[1536]: time="2025-10-30T23:58:52.293279604Z" level=info msg="CreateContainer within sandbox \"c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 23:58:52.302962 containerd[1536]: time="2025-10-30T23:58:52.302919437Z" level=info msg="Container 78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691: CDI devices from CRI Config.CDIDevices: []" Oct 30 23:58:52.308102 containerd[1536]: time="2025-10-30T23:58:52.308054556Z" level=info msg="CreateContainer within sandbox \"c280ceb9501ba5456ee3556abc899609b3b2046da07fbe698a16dc1e0de0984b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691\"" Oct 30 23:58:52.308719 containerd[1536]: time="2025-10-30T23:58:52.308678560Z" level=info msg="StartContainer for \"78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691\"" Oct 30 23:58:52.309878 containerd[1536]: time="2025-10-30T23:58:52.309772529Z" level=info msg="connecting to shim 78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691" address="unix:///run/containerd/s/affa8faf9aa803094fcb2b38366e541ceca45592b11b05ea388760caf6bd3e44" protocol=ttrpc version=3 Oct 30 23:58:52.329559 systemd[1]: Started cri-containerd-78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691.scope - libcontainer container 78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691. Oct 30 23:58:52.355982 containerd[1536]: time="2025-10-30T23:58:52.355948277Z" level=info msg="StartContainer for \"78fc2d6faae427059ff56bfd4b11bb55d25889a8352ec5f7bd6564790f42d691\" returns successfully" Oct 30 23:58:52.373285 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:37436.service - OpenSSH per-connection server daemon (10.0.0.1:37436). Oct 30 23:58:52.429145 sshd[4755]: Accepted publickey for core from 10.0.0.1 port 37436 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:58:52.430569 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:58:52.435452 systemd-logind[1507]: New session 9 of user core. Oct 30 23:58:52.443538 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 23:58:52.598931 sshd[4760]: Connection closed by 10.0.0.1 port 37436 Oct 30 23:58:52.599915 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:52.604007 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:37436.service: Deactivated successfully. Oct 30 23:58:52.605783 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 23:58:52.606552 systemd-logind[1507]: Session 9 logged out. 
Waiting for processes to exit. Oct 30 23:58:52.607697 systemd-logind[1507]: Removed session 9. Oct 30 23:58:52.659515 systemd-networkd[1434]: cali4e01decb85c: Gained IPv6LL Oct 30 23:58:52.787545 systemd-networkd[1434]: cali291f0f8cb6e: Gained IPv6LL Oct 30 23:58:53.220191 kubelet[2674]: E1030 23:58:53.219893 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64" Oct 30 23:58:53.231292 kubelet[2674]: I1030 23:58:53.231179 2674 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9pq2k" podStartSLOduration=41.23116261 podStartE2EDuration="41.23116261s" podCreationTimestamp="2025-10-30 23:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 23:58:53.230160203 +0000 UTC m=+46.266708939" watchObservedRunningTime="2025-10-30 23:58:53.23116261 +0000 UTC m=+46.267711346" Oct 30 23:58:54.053903 containerd[1536]: time="2025-10-30T23:58:54.053829810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777569f766-kspc6,Uid:e9077330-c4c7-4bab-b6ff-e93888d2d752,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:54.054252 containerd[1536]: time="2025-10-30T23:58:54.053833570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8575b,Uid:af40a0eb-fde7-43c7-a00e-4d438056d56c,Namespace:calico-apiserver,Attempt:0,}" Oct 30 23:58:54.195602 
systemd-networkd[1434]: cali692cbc9c03f: Gained IPv6LL Oct 30 23:58:54.226170 systemd-networkd[1434]: cali1986c363e34: Link UP Oct 30 23:58:54.226798 systemd-networkd[1434]: cali1986c363e34: Gained carrier Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.133 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0 calico-apiserver-7bf97986c9- calico-apiserver af40a0eb-fde7-43c7-a00e-4d438056d56c 829 0 2025-10-30 23:58:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bf97986c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bf97986c9-8575b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1986c363e34 [] [] }} ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.133 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.171 [INFO][4813] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" HandleID="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.245130 containerd[1536]: 
2025-10-30 23:58:54.171 [INFO][4813] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" HandleID="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118d20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bf97986c9-8575b", "timestamp":"2025-10-30 23:58:54.171621585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.171 [INFO][4813] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.171 [INFO][4813] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.171 [INFO][4813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.183 [INFO][4813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.188 [INFO][4813] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.197 [INFO][4813] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.200 [INFO][4813] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.205 [INFO][4813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.205 [INFO][4813] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.207 [INFO][4813] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73 Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.211 [INFO][4813] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4813] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" host="localhost" Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4813] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:54.245130 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4813] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" HandleID="k8s-pod-network.5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Workload="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.222 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0", GenerateName:"calico-apiserver-7bf97986c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"af40a0eb-fde7-43c7-a00e-4d438056d56c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bf97986c9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bf97986c9-8575b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1986c363e34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.223 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.223 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1986c363e34 ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.227 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.229 [INFO][4781] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0", GenerateName:"calico-apiserver-7bf97986c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"af40a0eb-fde7-43c7-a00e-4d438056d56c", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bf97986c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73", Pod:"calico-apiserver-7bf97986c9-8575b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1986c363e34", MAC:"62:31:49:65:61:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:54.246098 containerd[1536]: 2025-10-30 23:58:54.239 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" Namespace="calico-apiserver" Pod="calico-apiserver-7bf97986c9-8575b" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bf97986c9--8575b-eth0" Oct 30 23:58:54.320859 containerd[1536]: time="2025-10-30T23:58:54.320725388Z" level=info msg="connecting to shim 5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73" address="unix:///run/containerd/s/a73303671933f37ad5bc68b0422a18dbf9310368def1fa51a236f0f29aba5822" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:54.339330 systemd-networkd[1434]: cali1119ace1e07: Link UP Oct 30 23:58:54.340079 systemd-networkd[1434]: cali1119ace1e07: Gained carrier Oct 30 23:58:54.357618 systemd[1]: Started cri-containerd-5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73.scope - libcontainer container 5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73. Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.154 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--777569f766--kspc6-eth0 calico-apiserver-777569f766- calico-apiserver e9077330-c4c7-4bab-b6ff-e93888d2d752 830 0 2025-10-30 23:58:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777569f766 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-777569f766-kspc6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1119ace1e07 [] [] }} ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.154 [INFO][4795] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.191 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" HandleID="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Workload="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.191 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" HandleID="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Workload="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-777569f766-kspc6", "timestamp":"2025-10-30 23:58:54.191675891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.191 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.219 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.283 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.291 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.302 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.310 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.314 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.315 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.317 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.326 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.335 [INFO][4820] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.335 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" host="localhost" Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.335 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:54.360663 containerd[1536]: 2025-10-30 23:58:54.335 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" HandleID="k8s-pod-network.d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Workload="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.337 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--777569f766--kspc6-eth0", GenerateName:"calico-apiserver-777569f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9077330-c4c7-4bab-b6ff-e93888d2d752", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777569f766", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-777569f766-kspc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1119ace1e07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.337 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.337 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1119ace1e07 ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.339 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.340 [INFO][4795] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--777569f766--kspc6-eth0", GenerateName:"calico-apiserver-777569f766-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9077330-c4c7-4bab-b6ff-e93888d2d752", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777569f766", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd", Pod:"calico-apiserver-777569f766-kspc6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1119ace1e07", MAC:"b2:88:4e:5d:9b:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:54.361223 containerd[1536]: 2025-10-30 23:58:54.354 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" Namespace="calico-apiserver" Pod="calico-apiserver-777569f766-kspc6" WorkloadEndpoint="localhost-k8s-calico--apiserver--777569f766--kspc6-eth0" Oct 30 23:58:54.380649 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:54.387265 containerd[1536]: time="2025-10-30T23:58:54.387175630Z" level=info msg="connecting to shim d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd" address="unix:///run/containerd/s/a455ef2f169f405b45699ed8e2e7607f15e67fee340fb24bb09b59c732155a7a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:54.409576 systemd[1]: Started cri-containerd-d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd.scope - libcontainer container d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd. Oct 30 23:58:54.415534 containerd[1536]: time="2025-10-30T23:58:54.415493355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bf97986c9-8575b,Uid:af40a0eb-fde7-43c7-a00e-4d438056d56c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e92fffbd8dcc2c8e4417f5868c715f995f6d6717b04760b936deaa583ee8d73\"" Oct 30 23:58:54.417884 containerd[1536]: time="2025-10-30T23:58:54.417859173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:58:54.425842 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:54.450532 containerd[1536]: time="2025-10-30T23:58:54.450489729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777569f766-kspc6,Uid:e9077330-c4c7-4bab-b6ff-e93888d2d752,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d5f0e935abc6b790e25d74e93e2ffa4d0e8b22fce09104b5d08af8ed63b502cd\"" Oct 30 23:58:54.605298 containerd[1536]: time="2025-10-30T23:58:54.605174012Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Oct 30 23:58:54.607595 containerd[1536]: time="2025-10-30T23:58:54.607551030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:58:54.607691 containerd[1536]: time="2025-10-30T23:58:54.607604590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:58:54.607840 kubelet[2674]: E1030 23:58:54.607803 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:54.608585 kubelet[2674]: E1030 23:58:54.607853 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:54.608585 kubelet[2674]: E1030 23:58:54.608059 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85jtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bf97986c9-8575b_calico-apiserver(af40a0eb-fde7-43c7-a00e-4d438056d56c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:54.608695 containerd[1536]: time="2025-10-30T23:58:54.608167954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:58:54.609979 kubelet[2674]: E1030 23:58:54.609927 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c" Oct 30 23:58:54.816260 containerd[1536]: time="2025-10-30T23:58:54.816148624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:54.817217 containerd[1536]: time="2025-10-30T23:58:54.817161991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:58:54.817339 containerd[1536]: time="2025-10-30T23:58:54.817244232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:58:54.817424 kubelet[2674]: E1030 23:58:54.817361 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:54.817485 kubelet[2674]: E1030 23:58:54.817424 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:58:54.817760 kubelet[2674]: E1030 23:58:54.817569 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btg9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-777569f766-kspc6_calico-apiserver(e9077330-c4c7-4bab-b6ff-e93888d2d752): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:54.818792 kubelet[2674]: E1030 23:58:54.818735 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752" Oct 30 23:58:55.056785 containerd[1536]: time="2025-10-30T23:58:55.056709882Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-kc6vc,Uid:7aeb131b-e092-42f8-a46c-9c20bc9f295e,Namespace:calico-system,Attempt:0,}" Oct 30 23:58:55.160132 systemd-networkd[1434]: cali28c8d6999c6: Link UP Oct 30 23:58:55.161457 systemd-networkd[1434]: cali28c8d6999c6: Gained carrier Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.092 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kc6vc-eth0 csi-node-driver- calico-system 7aeb131b-e092-42f8-a46c-9c20bc9f295e 717 0 2025-10-30 23:58:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kc6vc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28c8d6999c6 [] [] }} ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.093 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.117 [INFO][4960] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" HandleID="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Workload="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 
23:58:55.117 [INFO][4960] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" HandleID="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Workload="localhost-k8s-csi--node--driver--kc6vc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kc6vc", "timestamp":"2025-10-30 23:58:55.117200633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.117 [INFO][4960] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.117 [INFO][4960] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.117 [INFO][4960] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.127 [INFO][4960] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.133 [INFO][4960] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.138 [INFO][4960] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.139 [INFO][4960] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.142 [INFO][4960] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.143 [INFO][4960] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.144 [INFO][4960] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459 Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.147 [INFO][4960] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.155 [INFO][4960] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.155 [INFO][4960] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" host="localhost" Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.155 [INFO][4960] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 23:58:55.172980 containerd[1536]: 2025-10-30 23:58:55.155 [INFO][4960] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" HandleID="k8s-pod-network.91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Workload="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.157 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kc6vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7aeb131b-e092-42f8-a46c-9c20bc9f295e", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kc6vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c8d6999c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.157 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.157 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28c8d6999c6 ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.161 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.162 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" 
Namespace="calico-system" Pod="csi-node-driver-kc6vc" WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kc6vc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7aeb131b-e092-42f8-a46c-9c20bc9f295e", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 23, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459", Pod:"csi-node-driver-kc6vc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28c8d6999c6", MAC:"3a:94:71:7d:ce:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 23:58:55.175143 containerd[1536]: 2025-10-30 23:58:55.170 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" Namespace="calico-system" Pod="csi-node-driver-kc6vc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kc6vc-eth0" Oct 30 23:58:55.200769 containerd[1536]: time="2025-10-30T23:58:55.200651548Z" level=info msg="connecting to shim 91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459" address="unix:///run/containerd/s/f01acdcdde1b0a466b88436849ee56232b9305dd9202970396664c17b981a321" namespace=k8s.io protocol=ttrpc version=3 Oct 30 23:58:55.231083 kubelet[2674]: E1030 23:58:55.230120 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c" Oct 30 23:58:55.232538 kubelet[2674]: E1030 23:58:55.232436 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752" Oct 30 23:58:55.235579 systemd[1]: Started cri-containerd-91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459.scope - libcontainer container 91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459. 
Oct 30 23:58:55.256246 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 23:58:55.288193 containerd[1536]: time="2025-10-30T23:58:55.288156692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kc6vc,Uid:7aeb131b-e092-42f8-a46c-9c20bc9f295e,Namespace:calico-system,Attempt:0,} returns sandbox id \"91a48187ba578d794371e60f065789130038461afa964b13ac6bf775070e4459\"" Oct 30 23:58:55.291359 containerd[1536]: time="2025-10-30T23:58:55.291323074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 23:58:55.524645 containerd[1536]: time="2025-10-30T23:58:55.523840371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:55.525208 containerd[1536]: time="2025-10-30T23:58:55.525147101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 23:58:55.525340 containerd[1536]: time="2025-10-30T23:58:55.525189541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 23:58:55.525546 kubelet[2674]: E1030 23:58:55.525497 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 23:58:55.525592 kubelet[2674]: E1030 23:58:55.525553 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 23:58:55.525741 kubelet[2674]: E1030 23:58:55.525704 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:55.527654 containerd[1536]: time="2025-10-30T23:58:55.527631398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 23:58:55.667537 systemd-networkd[1434]: cali1119ace1e07: Gained IPv6LL Oct 30 23:58:55.744152 containerd[1536]: time="2025-10-30T23:58:55.744098101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:55.745057 containerd[1536]: time="2025-10-30T23:58:55.745003547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 23:58:55.745110 containerd[1536]: time="2025-10-30T23:58:55.745056748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 23:58:55.745282 kubelet[2674]: E1030 23:58:55.745235 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 23:58:55.745649 kubelet[2674]: E1030 23:58:55.745283 2674 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 23:58:55.745649 kubelet[2674]: E1030 23:58:55.745411 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPri
vilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:55.746588 kubelet[2674]: E1030 23:58:55.746536 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:56.056840 containerd[1536]: time="2025-10-30T23:58:56.056791682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 23:58:56.115545 systemd-networkd[1434]: cali1986c363e34: Gained IPv6LL Oct 30 23:58:56.233135 kubelet[2674]: E1030 23:58:56.233095 2674 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c" Oct 30 23:58:56.233785 kubelet[2674]: E1030 23:58:56.233747 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:56.234106 kubelet[2674]: E1030 23:58:56.234058 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752" Oct 30 23:58:56.256063 containerd[1536]: time="2025-10-30T23:58:56.256016438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:56.257331 containerd[1536]: time="2025-10-30T23:58:56.257286006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 23:58:56.257890 containerd[1536]: time="2025-10-30T23:58:56.257360647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 23:58:56.257934 kubelet[2674]: E1030 23:58:56.257476 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:58:56.257934 kubelet[2674]: E1030 23:58:56.257525 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:58:56.257934 kubelet[2674]: E1030 23:58:56.257685 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:383267d036d54bedba55c947e586c654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:56.259556 containerd[1536]: time="2025-10-30T23:58:56.259524622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
23:58:56.307509 systemd-networkd[1434]: cali28c8d6999c6: Gained IPv6LL Oct 30 23:58:56.459152 containerd[1536]: time="2025-10-30T23:58:56.459089140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:58:56.460160 containerd[1536]: time="2025-10-30T23:58:56.460121987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 23:58:56.460225 containerd[1536]: time="2025-10-30T23:58:56.460189867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 23:58:56.460377 kubelet[2674]: E1030 23:58:56.460334 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:58:56.460460 kubelet[2674]: E1030 23:58:56.460403 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:58:56.460561 kubelet[2674]: E1030 23:58:56.460524 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 23:58:56.461718 kubelet[2674]: E1030 23:58:56.461674 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64784994b5-czbfj" podUID="31db6031-8469-4cd5-9b2b-86eba32dc2d4" Oct 30 23:58:57.242933 kubelet[2674]: E1030 23:58:57.242856 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:58:57.613422 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:37446.service - OpenSSH per-connection server daemon (10.0.0.1:37446). Oct 30 23:58:57.663679 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 37446 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:58:57.665466 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:58:57.669490 systemd-logind[1507]: New session 10 of user core. Oct 30 23:58:57.684584 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 23:58:57.880059 sshd[5034]: Connection closed by 10.0.0.1 port 37446 Oct 30 23:58:57.880376 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:57.892752 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:37446.service: Deactivated successfully. Oct 30 23:58:57.894673 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 23:58:57.895496 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Oct 30 23:58:57.899039 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:37458.service - OpenSSH per-connection server daemon (10.0.0.1:37458). Oct 30 23:58:57.900460 systemd-logind[1507]: Removed session 10. Oct 30 23:58:57.950280 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 37458 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:58:57.951780 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:58:57.956340 systemd-logind[1507]: New session 11 of user core. Oct 30 23:58:57.965603 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 30 23:58:58.137214 sshd[5051]: Connection closed by 10.0.0.1 port 37458 Oct 30 23:58:58.137728 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:58.147863 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:37458.service: Deactivated successfully. Oct 30 23:58:58.149661 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 23:58:58.151413 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Oct 30 23:58:58.161900 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:37468.service - OpenSSH per-connection server daemon (10.0.0.1:37468). Oct 30 23:58:58.169465 systemd-logind[1507]: Removed session 11. Oct 30 23:58:58.220547 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:58:58.221855 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:58:58.226169 systemd-logind[1507]: New session 12 of user core. Oct 30 23:58:58.239583 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 23:58:58.374676 sshd[5065]: Connection closed by 10.0.0.1 port 37468 Oct 30 23:58:58.374123 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Oct 30 23:58:58.378123 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:37468.service: Deactivated successfully. Oct 30 23:58:58.380941 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 23:58:58.381921 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Oct 30 23:58:58.383396 systemd-logind[1507]: Removed session 12. Oct 30 23:59:03.392804 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:39494.service - OpenSSH per-connection server daemon (10.0.0.1:39494). 
Oct 30 23:59:03.459905 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 39494 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:03.460812 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:03.464961 systemd-logind[1507]: New session 13 of user core. Oct 30 23:59:03.477139 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 23:59:03.619317 sshd[5092]: Connection closed by 10.0.0.1 port 39494 Oct 30 23:59:03.619732 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:03.631492 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:39494.service: Deactivated successfully. Oct 30 23:59:03.634795 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 23:59:03.641321 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Oct 30 23:59:03.644704 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:39502.service - OpenSSH per-connection server daemon (10.0.0.1:39502). Oct 30 23:59:03.646262 systemd-logind[1507]: Removed session 13. Oct 30 23:59:03.695686 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 39502 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:03.699417 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:03.704444 systemd-logind[1507]: New session 14 of user core. Oct 30 23:59:03.709557 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 23:59:03.922504 sshd[5110]: Connection closed by 10.0.0.1 port 39502 Oct 30 23:59:03.922411 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:03.936312 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:39502.service: Deactivated successfully. Oct 30 23:59:03.938905 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 23:59:03.939718 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. 
Oct 30 23:59:03.942140 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:39518.service - OpenSSH per-connection server daemon (10.0.0.1:39518). Oct 30 23:59:03.943203 systemd-logind[1507]: Removed session 14. Oct 30 23:59:04.003149 sshd[5121]: Accepted publickey for core from 10.0.0.1 port 39518 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:04.005035 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:04.009789 systemd-logind[1507]: New session 15 of user core. Oct 30 23:59:04.020632 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 23:59:04.663165 sshd[5124]: Connection closed by 10.0.0.1 port 39518 Oct 30 23:59:04.664052 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:04.673837 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:39518.service: Deactivated successfully. Oct 30 23:59:04.675860 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 23:59:04.676962 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Oct 30 23:59:04.680371 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:39532.service - OpenSSH per-connection server daemon (10.0.0.1:39532). Oct 30 23:59:04.684023 systemd-logind[1507]: Removed session 15. Oct 30 23:59:04.732976 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 39532 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:04.734308 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:04.738502 systemd-logind[1507]: New session 16 of user core. Oct 30 23:59:04.748628 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 30 23:59:05.024419 sshd[5146]: Connection closed by 10.0.0.1 port 39532 Oct 30 23:59:05.025160 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:05.035048 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:39532.service: Deactivated successfully. Oct 30 23:59:05.037198 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 23:59:05.039988 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Oct 30 23:59:05.042353 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:39544.service - OpenSSH per-connection server daemon (10.0.0.1:39544). Oct 30 23:59:05.047532 systemd-logind[1507]: Removed session 16. Oct 30 23:59:05.058499 containerd[1536]: time="2025-10-30T23:59:05.058454766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 23:59:05.099296 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 39544 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:05.101071 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:05.105892 systemd-logind[1507]: New session 17 of user core. Oct 30 23:59:05.118610 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 23:59:05.247101 sshd[5161]: Connection closed by 10.0.0.1 port 39544 Oct 30 23:59:05.247613 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:05.252695 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:39544.service: Deactivated successfully. Oct 30 23:59:05.254475 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 23:59:05.257118 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Oct 30 23:59:05.258353 systemd-logind[1507]: Removed session 17. 
Oct 30 23:59:05.279701 containerd[1536]: time="2025-10-30T23:59:05.279565053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:05.280822 containerd[1536]: time="2025-10-30T23:59:05.280785341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 23:59:05.281054 kubelet[2674]: E1030 23:59:05.281010 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 23:59:05.281367 kubelet[2674]: E1030 23:59:05.281061 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 23:59:05.281367 kubelet[2674]: E1030 23:59:05.281265 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkz6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7797789cc7-9n2z5_calico-system(09999d1d-585b-4b92-bfcd-e8302c7a6bdf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:05.282729 kubelet[2674]: E1030 23:59:05.282701 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf" Oct 30 23:59:05.302314 containerd[1536]: time="2025-10-30T23:59:05.280865341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Oct 30 23:59:05.302314 containerd[1536]: time="2025-10-30T23:59:05.281351344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:59:05.519590 containerd[1536]: time="2025-10-30T23:59:05.519528297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:05.520545 containerd[1536]: time="2025-10-30T23:59:05.520493383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:59:05.520599 containerd[1536]: time="2025-10-30T23:59:05.520578224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:59:05.520774 kubelet[2674]: E1030 23:59:05.520738 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:05.521397 kubelet[2674]: E1030 23:59:05.520874 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:05.521397 kubelet[2674]: E1030 23:59:05.521004 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qdbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7bf97986c9-8z8d7_calico-apiserver(96333c0e-7c74-4fa2-93be-d0ba59addf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:05.523654 kubelet[2674]: E1030 23:59:05.523602 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64" Oct 30 23:59:06.054597 containerd[1536]: time="2025-10-30T23:59:06.054521042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 23:59:06.303172 containerd[1536]: time="2025-10-30T23:59:06.303070843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:06.347980 containerd[1536]: time="2025-10-30T23:59:06.347835437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 23:59:06.347980 containerd[1536]: time="2025-10-30T23:59:06.347905317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 23:59:06.348237 kubelet[2674]: E1030 23:59:06.348190 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 23:59:06.348540 kubelet[2674]: E1030 23:59:06.348242 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 23:59:06.348540 kubelet[2674]: E1030 23:59:06.348406 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},VolumeMount{Name:kube-api-access-2hj8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nqdgd_calico-system(b6340a74-f3fe-42f0-a5b5-631bdd75d561): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:06.349962 kubelet[2674]: E1030 23:59:06.349930 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561" Oct 30 23:59:08.055491 containerd[1536]: time="2025-10-30T23:59:08.055452699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 23:59:08.266205 containerd[1536]: time="2025-10-30T23:59:08.266125443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:08.267534 containerd[1536]: time="2025-10-30T23:59:08.267271450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 23:59:08.267606 containerd[1536]: time="2025-10-30T23:59:08.267328210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 23:59:08.267780 kubelet[2674]: E1030 23:59:08.267713 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 23:59:08.267780 kubelet[2674]: E1030 23:59:08.267770 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 23:59:08.268074 kubelet[2674]: E1030 23:59:08.267895 2674 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:08.270303 containerd[1536]: time="2025-10-30T23:59:08.270091027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 23:59:08.486651 containerd[1536]: time="2025-10-30T23:59:08.486534486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:08.495512 containerd[1536]: time="2025-10-30T23:59:08.495378339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 23:59:08.495512 containerd[1536]: time="2025-10-30T23:59:08.495405939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 23:59:08.495680 kubelet[2674]: E1030 23:59:08.495622 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 23:59:08.495758 kubelet[2674]: E1030 23:59:08.495678 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 23:59:08.496160 kubelet[2674]: E1030 23:59:08.495804 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kc6vc_calico-system(7aeb131b-e092-42f8-a46c-9c20bc9f295e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:08.497299 kubelet[2674]: E1030 23:59:08.497251 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e" Oct 30 23:59:09.054911 containerd[1536]: time="2025-10-30T23:59:09.054866773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:59:09.266792 containerd[1536]: time="2025-10-30T23:59:09.266620553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:09.268139 containerd[1536]: time="2025-10-30T23:59:09.268024121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:59:09.268139 containerd[1536]: time="2025-10-30T23:59:09.268107202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:59:09.268308 kubelet[2674]: E1030 23:59:09.268261 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:09.268551 kubelet[2674]: E1030 23:59:09.268317 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:09.268551 kubelet[2674]: E1030 23:59:09.268451 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85jtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bf97986c9-8575b_calico-apiserver(af40a0eb-fde7-43c7-a00e-4d438056d56c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:09.269646 kubelet[2674]: E1030 23:59:09.269590 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c" Oct 30 23:59:10.261889 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:33638.service - OpenSSH per-connection server daemon (10.0.0.1:33638). Oct 30 23:59:10.317708 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 33638 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE Oct 30 23:59:10.319631 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 23:59:10.324656 systemd-logind[1507]: New session 18 of user core. Oct 30 23:59:10.332573 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 23:59:10.483969 sshd[5181]: Connection closed by 10.0.0.1 port 33638 Oct 30 23:59:10.484292 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Oct 30 23:59:10.487795 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:33638.service: Deactivated successfully. Oct 30 23:59:10.490110 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 23:59:10.491064 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Oct 30 23:59:10.492240 systemd-logind[1507]: Removed session 18. 
Oct 30 23:59:11.053857 containerd[1536]: time="2025-10-30T23:59:11.053798367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 23:59:11.322112 containerd[1536]: time="2025-10-30T23:59:11.321993656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:11.323049 containerd[1536]: time="2025-10-30T23:59:11.323000382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 23:59:11.323113 containerd[1536]: time="2025-10-30T23:59:11.323091262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 23:59:11.323289 kubelet[2674]: E1030 23:59:11.323243 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:11.323604 kubelet[2674]: E1030 23:59:11.323302 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 23:59:11.323604 kubelet[2674]: E1030 23:59:11.323440 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btg9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-777569f766-kspc6_calico-apiserver(e9077330-c4c7-4bab-b6ff-e93888d2d752): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:11.324733 kubelet[2674]: E1030 23:59:11.324700 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752" Oct 30 23:59:12.055675 kubelet[2674]: E1030 23:59:12.055594 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64784994b5-czbfj" podUID="31db6031-8469-4cd5-9b2b-86eba32dc2d4" Oct 30 23:59:14.260335 
containerd[1536]: time="2025-10-30T23:59:14.260289712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04f111326772f9f7182b4a5e882a65eea0a2d8fc8e390cbc8e6243a25d69127c\" id:\"e855dd45bb0844f3a907b1b9a4087a9397c7c2ab306b59ed8f33739b1e372d7a\" pid:5210 exited_at:{seconds:1761868754 nanos:260001951}"
Oct 30 23:59:15.495766 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644).
Oct 30 23:59:15.549935 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:59:15.551160 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:59:15.554864 systemd-logind[1507]: New session 19 of user core.
Oct 30 23:59:15.561554 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 30 23:59:15.673182 sshd[5228]: Connection closed by 10.0.0.1 port 33644
Oct 30 23:59:15.673040 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
Oct 30 23:59:15.676667 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:33644.service: Deactivated successfully.
Oct 30 23:59:15.678425 systemd[1]: session-19.scope: Deactivated successfully.
Oct 30 23:59:15.679047 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit.
Oct 30 23:59:15.680171 systemd-logind[1507]: Removed session 19.
Oct 30 23:59:18.053831 kubelet[2674]: E1030 23:59:18.053789 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8z8d7" podUID="96333c0e-7c74-4fa2-93be-d0ba59addf64"
Oct 30 23:59:20.054098 kubelet[2674]: E1030 23:59:20.054039 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nqdgd" podUID="b6340a74-f3fe-42f0-a5b5-631bdd75d561"
Oct 30 23:59:20.054098 kubelet[2674]: E1030 23:59:20.054051 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7797789cc7-9n2z5" podUID="09999d1d-585b-4b92-bfcd-e8302c7a6bdf"
Oct 30 23:59:20.055209 kubelet[2674]: E1030 23:59:20.054271 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bf97986c9-8575b" podUID="af40a0eb-fde7-43c7-a00e-4d438056d56c"
Oct 30 23:59:20.688724 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:46854.service - OpenSSH per-connection server daemon (10.0.0.1:46854).
Oct 30 23:59:20.737730 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 46854 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:59:20.738928 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:59:20.743909 systemd-logind[1507]: New session 20 of user core.
Oct 30 23:59:20.755580 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 30 23:59:20.889348 sshd[5245]: Connection closed by 10.0.0.1 port 46854
Oct 30 23:59:20.889921 sshd-session[5242]: pam_unix(sshd:session): session closed for user core
Oct 30 23:59:20.894018 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:46854.service: Deactivated successfully.
Oct 30 23:59:20.895982 systemd[1]: session-20.scope: Deactivated successfully.
Oct 30 23:59:20.897801 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit.
Oct 30 23:59:20.899500 systemd-logind[1507]: Removed session 20.
Oct 30 23:59:21.056201 kubelet[2674]: E1030 23:59:21.056027 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kc6vc" podUID="7aeb131b-e092-42f8-a46c-9c20bc9f295e"
Oct 30 23:59:23.054090 kubelet[2674]: E1030 23:59:23.054029 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-777569f766-kspc6" podUID="e9077330-c4c7-4bab-b6ff-e93888d2d752"
Oct 30 23:59:25.901698 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:46868.service - OpenSSH per-connection server daemon (10.0.0.1:46868).
Oct 30 23:59:25.958180 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:eMT2yr5isfZFKgr4u+rPHefcjXBjXGBc9p91goGyQfE
Oct 30 23:59:25.959288 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 23:59:25.963313 systemd-logind[1507]: New session 21 of user core.
Oct 30 23:59:25.972539 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 23:59:26.055960 containerd[1536]: time="2025-10-30T23:59:26.055921561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 30 23:59:26.090515 sshd[5270]: Connection closed by 10.0.0.1 port 46868
Oct 30 23:59:26.090844 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Oct 30 23:59:26.094644 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:46868.service: Deactivated successfully.
Oct 30 23:59:26.096224 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 23:59:26.097177 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit.
Oct 30 23:59:26.098288 systemd-logind[1507]: Removed session 21.
Oct 30 23:59:26.276702 containerd[1536]: time="2025-10-30T23:59:26.276262075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:26.277353 containerd[1536]: time="2025-10-30T23:59:26.277315800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 23:59:26.277516 containerd[1536]: time="2025-10-30T23:59:26.277323840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 23:59:26.277625 kubelet[2674]: E1030 23:59:26.277586 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:59:26.277895 kubelet[2674]: E1030 23:59:26.277635 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 23:59:26.278123 kubelet[2674]: E1030 23:59:26.277758 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:383267d036d54bedba55c947e586c654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:26.280064 containerd[1536]: time="2025-10-30T23:59:26.280038854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
23:59:26.518571 containerd[1536]: time="2025-10-30T23:59:26.518425662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 23:59:26.519310 containerd[1536]: time="2025-10-30T23:59:26.519214067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 23:59:26.519310 containerd[1536]: time="2025-10-30T23:59:26.519288547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 23:59:26.519573 kubelet[2674]: E1030 23:59:26.519535 2674 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:59:26.519642 kubelet[2674]: E1030 23:59:26.519584 2674 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 23:59:26.519731 kubelet[2674]: E1030 23:59:26.519696 2674 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64784994b5-czbfj_calico-system(31db6031-8469-4cd5-9b2b-86eba32dc2d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 23:59:26.520944 kubelet[2674]: E1030 23:59:26.520889 2674 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64784994b5-czbfj" podUID="31db6031-8469-4cd5-9b2b-86eba32dc2d4"