Oct 28 23:44:01.801755 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 28 23:44:01.801779 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Oct 28 22:25:10 -00 2025
Oct 28 23:44:01.801789 kernel: KASLR enabled
Oct 28 23:44:01.801795 kernel: efi: EFI v2.7 by EDK II
Oct 28 23:44:01.801801 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 28 23:44:01.801807 kernel: random: crng init done
Oct 28 23:44:01.801814 kernel: secureboot: Secure boot disabled
Oct 28 23:44:01.801820 kernel: ACPI: Early table checksum verification disabled
Oct 28 23:44:01.801826 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 28 23:44:01.801834 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 28 23:44:01.801840 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801846 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801852 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801859 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801866 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801874 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801881 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801888 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801894 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 28 23:44:01.801901 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 28 23:44:01.801907 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 28 23:44:01.801914 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:44:01.801920 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 28 23:44:01.801927 kernel: Zone ranges:
Oct 28 23:44:01.801933 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:44:01.801941 kernel: DMA32 empty
Oct 28 23:44:01.801948 kernel: Normal empty
Oct 28 23:44:01.801954 kernel: Device empty
Oct 28 23:44:01.801960 kernel: Movable zone start for each node
Oct 28 23:44:01.801966 kernel: Early memory node ranges
Oct 28 23:44:01.801973 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 28 23:44:01.801979 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 28 23:44:01.801986 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 28 23:44:01.801993 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 28 23:44:01.801999 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 28 23:44:01.802005 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 28 23:44:01.802012 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 28 23:44:01.802020 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 28 23:44:01.802026 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 28 23:44:01.802033 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 28 23:44:01.802043 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 28 23:44:01.802049 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 28 23:44:01.802056 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 28 23:44:01.802064 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 28 23:44:01.802071 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 28 23:44:01.802078 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 28 23:44:01.802085 kernel: psci: probing for conduit method from ACPI.
Oct 28 23:44:01.802092 kernel: psci: PSCIv1.1 detected in firmware.
Oct 28 23:44:01.802099 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 28 23:44:01.802106 kernel: psci: Trusted OS migration not required
Oct 28 23:44:01.802112 kernel: psci: SMC Calling Convention v1.1
Oct 28 23:44:01.802119 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 28 23:44:01.802126 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 28 23:44:01.802135 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 28 23:44:01.802142 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 28 23:44:01.802163 kernel: Detected PIPT I-cache on CPU0
Oct 28 23:44:01.802170 kernel: CPU features: detected: GIC system register CPU interface
Oct 28 23:44:01.802176 kernel: CPU features: detected: Spectre-v4
Oct 28 23:44:01.802183 kernel: CPU features: detected: Spectre-BHB
Oct 28 23:44:01.802190 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 28 23:44:01.802197 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 28 23:44:01.802203 kernel: CPU features: detected: ARM erratum 1418040
Oct 28 23:44:01.802210 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 28 23:44:01.802217 kernel: alternatives: applying boot alternatives
Oct 28 23:44:01.802224 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2617901133921edac864d90cb956099796bbbbfbc133441a2778ec034c4cf4d9
Oct 28 23:44:01.802233 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 28 23:44:01.802240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 28 23:44:01.802247 kernel: Fallback order for Node 0: 0
Oct 28 23:44:01.802254 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 28 23:44:01.802260 kernel: Policy zone: DMA
Oct 28 23:44:01.802267 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 28 23:44:01.802274 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 28 23:44:01.802280 kernel: software IO TLB: area num 4.
Oct 28 23:44:01.802287 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 28 23:44:01.802294 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 28 23:44:01.802301 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 28 23:44:01.802309 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 28 23:44:01.802316 kernel: rcu: RCU event tracing is enabled.
Oct 28 23:44:01.802323 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 28 23:44:01.802330 kernel: Trampoline variant of Tasks RCU enabled.
Oct 28 23:44:01.802337 kernel: Tracing variant of Tasks RCU enabled.
Oct 28 23:44:01.802344 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 28 23:44:01.802351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 28 23:44:01.802358 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 23:44:01.802365 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 28 23:44:01.802371 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 28 23:44:01.802378 kernel: GICv3: 256 SPIs implemented
Oct 28 23:44:01.802386 kernel: GICv3: 0 Extended SPIs implemented
Oct 28 23:44:01.802393 kernel: Root IRQ handler: gic_handle_irq
Oct 28 23:44:01.802399 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 28 23:44:01.802406 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 28 23:44:01.802413 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 28 23:44:01.802419 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 28 23:44:01.802426 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 28 23:44:01.802433 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 28 23:44:01.802459 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 28 23:44:01.802466 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 28 23:44:01.802473 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 28 23:44:01.802480 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:44:01.802489 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 28 23:44:01.802496 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 28 23:44:01.802503 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 28 23:44:01.802509 kernel: arm-pv: using stolen time PV
Oct 28 23:44:01.802516 kernel: Console: colour dummy device 80x25
Oct 28 23:44:01.802524 kernel: ACPI: Core revision 20240827
Oct 28 23:44:01.802538 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 28 23:44:01.802551 kernel: pid_max: default: 32768 minimum: 301
Oct 28 23:44:01.802558 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 28 23:44:01.802565 kernel: landlock: Up and running.
Oct 28 23:44:01.802573 kernel: SELinux: Initializing.
Oct 28 23:44:01.802580 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 23:44:01.802587 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 28 23:44:01.802594 kernel: rcu: Hierarchical SRCU implementation.
Oct 28 23:44:01.802602 kernel: rcu: Max phase no-delay instances is 400.
Oct 28 23:44:01.802609 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 28 23:44:01.802624 kernel: Remapping and enabling EFI services.
Oct 28 23:44:01.802631 kernel: smp: Bringing up secondary CPUs ...
Oct 28 23:44:01.802638 kernel: Detected PIPT I-cache on CPU1
Oct 28 23:44:01.802651 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 28 23:44:01.802658 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 28 23:44:01.802666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:44:01.802675 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 28 23:44:01.802683 kernel: Detected PIPT I-cache on CPU2
Oct 28 23:44:01.802690 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 28 23:44:01.802698 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 28 23:44:01.802705 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:44:01.802714 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 28 23:44:01.802722 kernel: Detected PIPT I-cache on CPU3
Oct 28 23:44:01.802729 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 28 23:44:01.802737 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 28 23:44:01.802744 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 28 23:44:01.802751 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 28 23:44:01.802759 kernel: smp: Brought up 1 node, 4 CPUs
Oct 28 23:44:01.802767 kernel: SMP: Total of 4 processors activated.
Oct 28 23:44:01.802774 kernel: CPU: All CPU(s) started at EL1
Oct 28 23:44:01.802783 kernel: CPU features: detected: 32-bit EL0 Support
Oct 28 23:44:01.802791 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 28 23:44:01.802798 kernel: CPU features: detected: Common not Private translations
Oct 28 23:44:01.802805 kernel: CPU features: detected: CRC32 instructions
Oct 28 23:44:01.802813 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 28 23:44:01.802820 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 28 23:44:01.802828 kernel: CPU features: detected: LSE atomic instructions
Oct 28 23:44:01.802835 kernel: CPU features: detected: Privileged Access Never
Oct 28 23:44:01.802843 kernel: CPU features: detected: RAS Extension Support
Oct 28 23:44:01.802851 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 28 23:44:01.802859 kernel: alternatives: applying system-wide alternatives
Oct 28 23:44:01.802867 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 28 23:44:01.802875 kernel: Memory: 2424416K/2572288K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 125536K reserved, 16384K cma-reserved)
Oct 28 23:44:01.802882 kernel: devtmpfs: initialized
Oct 28 23:44:01.802890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 28 23:44:01.802897 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 28 23:44:01.802905 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 28 23:44:01.802912 kernel: 0 pages in range for non-PLT usage
Oct 28 23:44:01.802921 kernel: 508560 pages in range for PLT usage
Oct 28 23:44:01.802928 kernel: pinctrl core: initialized pinctrl subsystem
Oct 28 23:44:01.802935 kernel: SMBIOS 3.0.0 present.
Oct 28 23:44:01.802943 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 28 23:44:01.802950 kernel: DMI: Memory slots populated: 1/1
Oct 28 23:44:01.802957 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 28 23:44:01.802965 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 28 23:44:01.802972 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 28 23:44:01.802980 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 28 23:44:01.802989 kernel: audit: initializing netlink subsys (disabled)
Oct 28 23:44:01.802996 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Oct 28 23:44:01.803004 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 28 23:44:01.803011 kernel: cpuidle: using governor menu
Oct 28 23:44:01.803018 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 28 23:44:01.803026 kernel: ASID allocator initialised with 32768 entries
Oct 28 23:44:01.803034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 28 23:44:01.803041 kernel: Serial: AMBA PL011 UART driver
Oct 28 23:44:01.803048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 28 23:44:01.803057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 28 23:44:01.803064 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 28 23:44:01.803072 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 28 23:44:01.803079 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 28 23:44:01.803086 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 28 23:44:01.803148 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 28 23:44:01.803156 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 28 23:44:01.803163 kernel: ACPI: Added _OSI(Module Device)
Oct 28 23:44:01.803171 kernel: ACPI: Added _OSI(Processor Device)
Oct 28 23:44:01.803181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 28 23:44:01.803189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 28 23:44:01.803196 kernel: ACPI: Interpreter enabled
Oct 28 23:44:01.803204 kernel: ACPI: Using GIC for interrupt routing
Oct 28 23:44:01.803211 kernel: ACPI: MCFG table detected, 1 entries
Oct 28 23:44:01.803219 kernel: ACPI: CPU0 has been hot-added
Oct 28 23:44:01.803226 kernel: ACPI: CPU1 has been hot-added
Oct 28 23:44:01.803233 kernel: ACPI: CPU2 has been hot-added
Oct 28 23:44:01.803241 kernel: ACPI: CPU3 has been hot-added
Oct 28 23:44:01.803248 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 28 23:44:01.803257 kernel: printk: legacy console [ttyAMA0] enabled
Oct 28 23:44:01.803265 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 28 23:44:01.803389 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 28 23:44:01.803476 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 28 23:44:01.803542 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 28 23:44:01.803601 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 28 23:44:01.803674 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 28 23:44:01.803689 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 28 23:44:01.803696 kernel: PCI host bridge to bus 0000:00
Oct 28 23:44:01.803764 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 28 23:44:01.803821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 28 23:44:01.803876 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 28 23:44:01.803931 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 28 23:44:01.804010 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 28 23:44:01.804122 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 28 23:44:01.804204 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 28 23:44:01.804268 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 28 23:44:01.804330 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 28 23:44:01.804391 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 28 23:44:01.804467 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 28 23:44:01.804540 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 28 23:44:01.804598 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 28 23:44:01.804666 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 28 23:44:01.804723 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 28 23:44:01.804732 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 28 23:44:01.804740 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 28 23:44:01.804748 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 28 23:44:01.804755 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 28 23:44:01.804765 kernel: iommu: Default domain type: Translated
Oct 28 23:44:01.804773 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 28 23:44:01.804781 kernel: efivars: Registered efivars operations
Oct 28 23:44:01.804788 kernel: vgaarb: loaded
Oct 28 23:44:01.804796 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 28 23:44:01.804803 kernel: VFS: Disk quotas dquot_6.6.0
Oct 28 23:44:01.804811 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 28 23:44:01.804818 kernel: pnp: PnP ACPI init
Oct 28 23:44:01.804893 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 28 23:44:01.804906 kernel: pnp: PnP ACPI: found 1 devices
Oct 28 23:44:01.804913 kernel: NET: Registered PF_INET protocol family
Oct 28 23:44:01.804921 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 28 23:44:01.804928 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 28 23:44:01.804936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 28 23:44:01.804944 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 28 23:44:01.804951 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 28 23:44:01.804959 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 28 23:44:01.804968 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 23:44:01.804976 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 28 23:44:01.804984 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 28 23:44:01.804991 kernel: PCI: CLS 0 bytes, default 64
Oct 28 23:44:01.804999 kernel: kvm [1]: HYP mode not available
Oct 28 23:44:01.805006 kernel: Initialise system trusted keyrings
Oct 28 23:44:01.805014 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 28 23:44:01.805021 kernel: Key type asymmetric registered
Oct 28 23:44:01.805029 kernel: Asymmetric key parser 'x509' registered
Oct 28 23:44:01.805038 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 28 23:44:01.805046 kernel: io scheduler mq-deadline registered
Oct 28 23:44:01.805053 kernel: io scheduler kyber registered
Oct 28 23:44:01.805061 kernel: io scheduler bfq registered
Oct 28 23:44:01.805068 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 28 23:44:01.805076 kernel: ACPI: button: Power Button [PWRB]
Oct 28 23:44:01.805085 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 28 23:44:01.805147 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 28 23:44:01.805157 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 28 23:44:01.805166 kernel: thunder_xcv, ver 1.0
Oct 28 23:44:01.805173 kernel: thunder_bgx, ver 1.0
Oct 28 23:44:01.805181 kernel: nicpf, ver 1.0
Oct 28 23:44:01.805188 kernel: nicvf, ver 1.0
Oct 28 23:44:01.805258 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 28 23:44:01.805315 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-28T23:44:01 UTC (1761695041)
Oct 28 23:44:01.805325 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 28 23:44:01.805333 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 28 23:44:01.805342 kernel: watchdog: NMI not fully supported
Oct 28 23:44:01.805349 kernel: watchdog: Hard watchdog permanently disabled
Oct 28 23:44:01.805357 kernel: NET: Registered PF_INET6 protocol family
Oct 28 23:44:01.805364 kernel: Segment Routing with IPv6
Oct 28 23:44:01.805372 kernel: In-situ OAM (IOAM) with IPv6
Oct 28 23:44:01.805379 kernel: NET: Registered PF_PACKET protocol family
Oct 28 23:44:01.805386 kernel: Key type dns_resolver registered
Oct 28 23:44:01.805393 kernel: registered taskstats version 1
Oct 28 23:44:01.805401 kernel: Loading compiled-in X.509 certificates
Oct 28 23:44:01.805408 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 3e6135fe5840578e056591d2f640b860b56ac0c2'
Oct 28 23:44:01.805417 kernel: Demotion targets for Node 0: null
Oct 28 23:44:01.805424 kernel: Key type .fscrypt registered
Oct 28 23:44:01.805432 kernel: Key type fscrypt-provisioning registered
Oct 28 23:44:01.805446 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 28 23:44:01.805453 kernel: ima: Allocated hash algorithm: sha1
Oct 28 23:44:01.805461 kernel: ima: No architecture policies found
Oct 28 23:44:01.805468 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 28 23:44:01.805476 kernel: clk: Disabling unused clocks
Oct 28 23:44:01.805483 kernel: PM: genpd: Disabling unused power domains
Oct 28 23:44:01.805492 kernel: Warning: unable to open an initial console.
Oct 28 23:44:01.805500 kernel: Freeing unused kernel memory: 38976K
Oct 28 23:44:01.805507 kernel: Run /init as init process
Oct 28 23:44:01.805515 kernel: with arguments:
Oct 28 23:44:01.805522 kernel: /init
Oct 28 23:44:01.805529 kernel: with environment:
Oct 28 23:44:01.805536 kernel: HOME=/
Oct 28 23:44:01.805543 kernel: TERM=linux
Oct 28 23:44:01.805552 systemd[1]: Successfully made /usr/ read-only.
Oct 28 23:44:01.805564 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 28 23:44:01.805572 systemd[1]: Detected virtualization kvm.
Oct 28 23:44:01.805580 systemd[1]: Detected architecture arm64.
Oct 28 23:44:01.805587 systemd[1]: Running in initrd.
Oct 28 23:44:01.805595 systemd[1]: No hostname configured, using default hostname.
Oct 28 23:44:01.805603 systemd[1]: Hostname set to .
Oct 28 23:44:01.805611 systemd[1]: Initializing machine ID from VM UUID.
Oct 28 23:44:01.805625 systemd[1]: Queued start job for default target initrd.target.
Oct 28 23:44:01.805634 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 28 23:44:01.805642 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 28 23:44:01.805650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 28 23:44:01.805658 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 28 23:44:01.805666 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 28 23:44:01.805675 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 28 23:44:01.805685 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 28 23:44:01.805693 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 28 23:44:01.805701 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 28 23:44:01.805709 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 28 23:44:01.805717 systemd[1]: Reached target paths.target - Path Units.
Oct 28 23:44:01.805725 systemd[1]: Reached target slices.target - Slice Units.
Oct 28 23:44:01.805733 systemd[1]: Reached target swap.target - Swaps.
Oct 28 23:44:01.805741 systemd[1]: Reached target timers.target - Timer Units.
Oct 28 23:44:01.805751 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 28 23:44:01.805759 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 28 23:44:01.805767 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 28 23:44:01.805775 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 28 23:44:01.805783 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 28 23:44:01.805791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 28 23:44:01.805799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 28 23:44:01.805807 systemd[1]: Reached target sockets.target - Socket Units.
Oct 28 23:44:01.805815 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 28 23:44:01.805825 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 28 23:44:01.805832 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 28 23:44:01.805841 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 28 23:44:01.805849 systemd[1]: Starting systemd-fsck-usr.service...
Oct 28 23:44:01.805857 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 28 23:44:01.805865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 28 23:44:01.805873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 28 23:44:01.805881 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 28 23:44:01.805891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 28 23:44:01.805899 systemd[1]: Finished systemd-fsck-usr.service.
Oct 28 23:44:01.805907 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 28 23:44:01.805930 systemd-journald[243]: Collecting audit messages is disabled.
Oct 28 23:44:01.805951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 28 23:44:01.805960 systemd-journald[243]: Journal started
Oct 28 23:44:01.805979 systemd-journald[243]: Runtime Journal (/run/log/journal/93f19a88ac4e4705b12d820b354ff1f0) is 6M, max 48.5M, 42.4M free.
Oct 28 23:44:01.810535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 28 23:44:01.796842 systemd-modules-load[247]: Inserted module 'overlay'
Oct 28 23:44:01.814215 systemd-modules-load[247]: Inserted module 'br_netfilter'
Oct 28 23:44:01.815246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 28 23:44:01.815266 kernel: Bridge firewalling registered
Oct 28 23:44:01.818631 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 28 23:44:01.830635 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 28 23:44:01.832035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 28 23:44:01.836814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 28 23:44:01.838474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 28 23:44:01.851220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 28 23:44:01.852836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 28 23:44:01.857169 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 28 23:44:01.861484 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 28 23:44:01.862822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 28 23:44:01.863330 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 28 23:44:01.867149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 28 23:44:01.872000 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 28 23:44:01.874374 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2617901133921edac864d90cb956099796bbbbfbc133441a2778ec034c4cf4d9
Oct 28 23:44:01.907144 systemd-resolved[300]: Positive Trust Anchors:
Oct 28 23:44:01.907161 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 28 23:44:01.907192 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 28 23:44:01.912235 systemd-resolved[300]: Defaulting to hostname 'linux'.
Oct 28 23:44:01.913308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 28 23:44:01.917675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 28 23:44:01.948486 kernel: SCSI subsystem initialized
Oct 28 23:44:01.953459 kernel: Loading iSCSI transport class v2.0-870.
Oct 28 23:44:01.961480 kernel: iscsi: registered transport (tcp)
Oct 28 23:44:01.974489 kernel: iscsi: registered transport (qla4xxx)
Oct 28 23:44:01.974540 kernel: QLogic iSCSI HBA Driver
Oct 28 23:44:01.990583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 28 23:44:02.006983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 28 23:44:02.009276 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 28 23:44:02.049989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 28 23:44:02.052320 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 28 23:44:02.109478 kernel: raid6: neonx8 gen() 15546 MB/s Oct 28 23:44:02.126467 kernel: raid6: neonx4 gen() 15017 MB/s Oct 28 23:44:02.143468 kernel: raid6: neonx2 gen() 12546 MB/s Oct 28 23:44:02.160468 kernel: raid6: neonx1 gen() 10413 MB/s Oct 28 23:44:02.177467 kernel: raid6: int64x8 gen() 6887 MB/s Oct 28 23:44:02.194466 kernel: raid6: int64x4 gen() 7337 MB/s Oct 28 23:44:02.211468 kernel: raid6: int64x2 gen() 6092 MB/s Oct 28 23:44:02.228659 kernel: raid6: int64x1 gen() 5044 MB/s Oct 28 23:44:02.228682 kernel: raid6: using algorithm neonx8 gen() 15546 MB/s Oct 28 23:44:02.246692 kernel: raid6: .... xor() 12038 MB/s, rmw enabled Oct 28 23:44:02.246716 kernel: raid6: using neon recovery algorithm Oct 28 23:44:02.252995 kernel: xor: measuring software checksum speed Oct 28 23:44:02.253017 kernel: 8regs : 20541 MB/sec Oct 28 23:44:02.253026 kernel: 32regs : 21653 MB/sec Oct 28 23:44:02.253663 kernel: arm64_neon : 28013 MB/sec Oct 28 23:44:02.253682 kernel: xor: using function: arm64_neon (28013 MB/sec) Oct 28 23:44:02.306473 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 23:44:02.312851 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 23:44:02.316120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:44:02.343323 systemd-udevd[501]: Using default interface naming scheme 'v255'. Oct 28 23:44:02.347380 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:44:02.351577 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 23:44:02.371968 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Oct 28 23:44:02.396066 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 23:44:02.398512 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 23:44:02.451311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Oct 28 23:44:02.454182 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 23:44:02.499462 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 28 23:44:02.510700 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 28 23:44:02.512525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 23:44:02.512661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:44:02.522631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:44:02.524783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:44:02.529463 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 23:44:02.529526 kernel: GPT:9289727 != 19775487 Oct 28 23:44:02.529537 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 23:44:02.529548 kernel: GPT:9289727 != 19775487 Oct 28 23:44:02.529557 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 28 23:44:02.529566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 23:44:02.561093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:44:02.570029 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 23:44:02.571763 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 23:44:02.582027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 23:44:02.589226 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 23:44:02.590649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 28 23:44:02.600571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Oct 28 23:44:02.601936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 23:44:02.604356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:44:02.606723 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 23:44:02.609686 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 23:44:02.611556 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 23:44:02.636422 disk-uuid[593]: Primary Header is updated. Oct 28 23:44:02.636422 disk-uuid[593]: Secondary Entries is updated. Oct 28 23:44:02.636422 disk-uuid[593]: Secondary Header is updated. Oct 28 23:44:02.642032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 23:44:02.644313 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 23:44:03.648468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 23:44:03.649103 disk-uuid[598]: The operation has completed successfully. Oct 28 23:44:03.673725 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 23:44:03.674891 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 23:44:03.709238 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 28 23:44:03.734648 sh[615]: Success Oct 28 23:44:03.747864 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 28 23:44:03.747912 kernel: device-mapper: uevent: version 1.0.3 Oct 28 23:44:03.749233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 23:44:03.758476 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 28 23:44:03.791802 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 28 23:44:03.794959 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Oct 28 23:44:03.810722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 28 23:44:03.820206 kernel: BTRFS: device fsid 7512c523-1bf8-4957-99f8-820cd4fd1b77 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (627) Oct 28 23:44:03.820255 kernel: BTRFS info (device dm-0): first mount of filesystem 7512c523-1bf8-4957-99f8-820cd4fd1b77 Oct 28 23:44:03.820267 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:44:03.826247 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 23:44:03.826294 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 23:44:03.827468 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 28 23:44:03.829923 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 23:44:03.832539 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 23:44:03.833347 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 23:44:03.836331 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 23:44:03.861505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (658) Oct 28 23:44:03.861553 kernel: BTRFS info (device vda6): first mount of filesystem ba3f0202-aa83-40f0-8bc2-5783de720729 Oct 28 23:44:03.861565 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:44:03.866457 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:44:03.866509 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:44:03.871456 kernel: BTRFS info (device vda6): last unmount of filesystem ba3f0202-aa83-40f0-8bc2-5783de720729 Oct 28 23:44:03.873475 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Oct 28 23:44:03.876327 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 28 23:44:03.937458 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 23:44:03.940719 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 23:44:03.977735 systemd-networkd[801]: lo: Link UP Oct 28 23:44:03.977750 systemd-networkd[801]: lo: Gained carrier Oct 28 23:44:03.978512 systemd-networkd[801]: Enumeration completed Oct 28 23:44:03.978685 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 23:44:03.979087 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 28 23:44:03.979090 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 23:44:03.980381 systemd[1]: Reached target network.target - Network. Oct 28 23:44:03.981432 systemd-networkd[801]: eth0: Link UP Oct 28 23:44:03.981745 systemd-networkd[801]: eth0: Gained carrier Oct 28 23:44:03.981755 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 28 23:44:03.993527 ignition[708]: Ignition 2.22.0 Oct 28 23:44:03.993540 ignition[708]: Stage: fetch-offline Oct 28 23:44:03.993622 ignition[708]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:03.993632 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:03.993799 ignition[708]: parsed url from cmdline: "" Oct 28 23:44:03.993803 ignition[708]: no config URL provided Oct 28 23:44:03.993810 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 23:44:03.993821 ignition[708]: no config at "/usr/lib/ignition/user.ign" Oct 28 23:44:03.993844 ignition[708]: op(1): [started] loading QEMU firmware config module Oct 28 23:44:03.993849 ignition[708]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 23:44:03.998869 ignition[708]: op(1): [finished] loading QEMU firmware config module Oct 28 23:44:04.002503 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 23:44:04.046180 ignition[708]: parsing config with SHA512: 928e188aee9e10ce937303bbc4c208487255087cfd1a9683a70f9e9dc14ac863183676df02ed5bf778f569a37fa3c4c8674b7792347edbe7e79a9a1c7ecca4e0 Oct 28 23:44:04.051945 unknown[708]: fetched base config from "system" Oct 28 23:44:04.051964 unknown[708]: fetched user config from "qemu" Oct 28 23:44:04.052462 ignition[708]: fetch-offline: fetch-offline passed Oct 28 23:44:04.054426 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 23:44:04.052529 ignition[708]: Ignition finished successfully Oct 28 23:44:04.056039 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 23:44:04.056770 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 28 23:44:04.085962 ignition[814]: Ignition 2.22.0 Oct 28 23:44:04.085975 ignition[814]: Stage: kargs Oct 28 23:44:04.086114 ignition[814]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:04.086123 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:04.087008 ignition[814]: kargs: kargs passed Oct 28 23:44:04.087051 ignition[814]: Ignition finished successfully Oct 28 23:44:04.090064 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 23:44:04.092157 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 23:44:04.120405 ignition[822]: Ignition 2.22.0 Oct 28 23:44:04.120421 ignition[822]: Stage: disks Oct 28 23:44:04.120560 ignition[822]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:04.123771 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 23:44:04.120569 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:04.125133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 23:44:04.121294 ignition[822]: disks: disks passed Oct 28 23:44:04.127086 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 23:44:04.121333 ignition[822]: Ignition finished successfully Oct 28 23:44:04.129310 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 23:44:04.131381 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 23:44:04.133009 systemd[1]: Reached target basic.target - Basic System. Oct 28 23:44:04.136037 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 28 23:44:04.157593 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 28 23:44:04.162074 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 23:44:04.164614 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 28 23:44:04.227458 kernel: EXT4-fs (vda9): mounted filesystem aae5f5ce-7447-4d1d-a4a3-9ebe9fae06e0 r/w with ordered data mode. Quota mode: none. Oct 28 23:44:04.227935 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 23:44:04.229352 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 23:44:04.231974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 23:44:04.233695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 23:44:04.234797 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 23:44:04.234836 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 23:44:04.234857 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 23:44:04.254036 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 23:44:04.258025 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 23:44:04.262560 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841) Oct 28 23:44:04.262601 kernel: BTRFS info (device vda6): first mount of filesystem ba3f0202-aa83-40f0-8bc2-5783de720729 Oct 28 23:44:04.262624 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:44:04.262636 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:44:04.262646 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:44:04.265808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 28 23:44:04.302910 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 23:44:04.307398 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory Oct 28 23:44:04.310295 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 23:44:04.314263 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 23:44:04.385502 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 23:44:04.387882 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 23:44:04.389575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 23:44:04.407475 kernel: BTRFS info (device vda6): last unmount of filesystem ba3f0202-aa83-40f0-8bc2-5783de720729 Oct 28 23:44:04.415925 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 23:44:04.441835 ignition[956]: INFO : Ignition 2.22.0 Oct 28 23:44:04.441835 ignition[956]: INFO : Stage: mount Oct 28 23:44:04.443554 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:04.443554 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:04.443554 ignition[956]: INFO : mount: mount passed Oct 28 23:44:04.443554 ignition[956]: INFO : Ignition finished successfully Oct 28 23:44:04.444798 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 23:44:04.448951 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 28 23:44:04.817901 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 23:44:04.819327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 28 23:44:04.844191 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Oct 28 23:44:04.844225 kernel: BTRFS info (device vda6): first mount of filesystem ba3f0202-aa83-40f0-8bc2-5783de720729 Oct 28 23:44:04.844237 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:44:04.847873 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:44:04.847902 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:44:04.849112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 23:44:04.879123 ignition[986]: INFO : Ignition 2.22.0 Oct 28 23:44:04.879123 ignition[986]: INFO : Stage: files Oct 28 23:44:04.880862 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:04.880862 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:04.880862 ignition[986]: DEBUG : files: compiled without relabeling support, skipping Oct 28 23:44:04.884299 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 28 23:44:04.884299 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 28 23:44:04.887121 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 28 23:44:04.887121 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 28 23:44:04.887121 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 28 23:44:04.886426 unknown[986]: wrote ssh authorized keys file for user: core Oct 28 23:44:04.892226 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 28 23:44:04.894286 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 28 
23:44:05.088119 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 28 23:44:05.253842 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 23:44:05.256068 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 28 23:44:05.270560 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 28 23:44:05.623484 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 28 23:44:05.803602 systemd-networkd[801]: eth0: Gained IPv6LL Oct 28 23:44:05.990247 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 28 23:44:05.990247 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 28 23:44:05.996137 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 28 23:44:06.011917 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 23:44:06.014381 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 23:44:06.016060 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 28 23:44:06.016060 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 28 23:44:06.016060 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 28 23:44:06.016060 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 28 23:44:06.016060 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 28 23:44:06.016060 ignition[986]: INFO : files: files passed Oct 28 23:44:06.016060 ignition[986]: INFO : Ignition finished successfully Oct 28 23:44:06.018054 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 28 23:44:06.023723 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 28 23:44:06.026603 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 28 23:44:06.040785 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 28 23:44:06.040924 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 28 23:44:06.044945 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory Oct 28 23:44:06.046540 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:44:06.046540 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:44:06.050153 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:44:06.049260 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 23:44:06.051689 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 28 23:44:06.053571 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 28 23:44:06.085175 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 28 23:44:06.085289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 28 23:44:06.087736 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 28 23:44:06.089669 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 28 23:44:06.091516 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 28 23:44:06.092315 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 28 23:44:06.106973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 23:44:06.109598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 28 23:44:06.130470 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 28 23:44:06.131771 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:44:06.133873 systemd[1]: Stopped target timers.target - Timer Units. 
Oct 28 23:44:06.135695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 28 23:44:06.135818 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 23:44:06.138396 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 28 23:44:06.140493 systemd[1]: Stopped target basic.target - Basic System. Oct 28 23:44:06.142229 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 28 23:44:06.144010 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 23:44:06.146009 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 28 23:44:06.148029 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 28 23:44:06.150026 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 28 23:44:06.151920 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 23:44:06.153952 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 28 23:44:06.156034 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 28 23:44:06.157886 systemd[1]: Stopped target swap.target - Swaps. Oct 28 23:44:06.159542 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 28 23:44:06.159675 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 28 23:44:06.162314 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 28 23:44:06.163578 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 23:44:06.165630 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 28 23:44:06.166521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 23:44:06.167966 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 28 23:44:06.168129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Oct 28 23:44:06.170264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 28 23:44:06.170423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 23:44:06.173851 systemd[1]: Stopped target paths.target - Path Units. Oct 28 23:44:06.181171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 28 23:44:06.181293 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 23:44:06.185487 systemd[1]: Stopped target slices.target - Slice Units. Oct 28 23:44:06.191980 systemd[1]: Stopped target sockets.target - Socket Units. Oct 28 23:44:06.193872 systemd[1]: iscsid.socket: Deactivated successfully. Oct 28 23:44:06.193957 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 23:44:06.195853 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 28 23:44:06.195971 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 23:44:06.197586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 28 23:44:06.197760 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 23:44:06.199792 systemd[1]: ignition-files.service: Deactivated successfully. Oct 28 23:44:06.199928 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 28 23:44:06.202904 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 28 23:44:06.205288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 28 23:44:06.206259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 28 23:44:06.206461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 23:44:06.208529 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 28 23:44:06.208688 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 28 23:44:06.216161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 28 23:44:06.222470 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 28 23:44:06.230109 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 28 23:44:06.236778 ignition[1040]: INFO : Ignition 2.22.0 Oct 28 23:44:06.236778 ignition[1040]: INFO : Stage: umount Oct 28 23:44:06.239419 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:44:06.239419 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:44:06.239419 ignition[1040]: INFO : umount: umount passed Oct 28 23:44:06.239419 ignition[1040]: INFO : Ignition finished successfully Oct 28 23:44:06.236781 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 28 23:44:06.236885 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 28 23:44:06.239713 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 28 23:44:06.239799 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 28 23:44:06.241892 systemd[1]: Stopped target network.target - Network. Oct 28 23:44:06.245170 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 28 23:44:06.245246 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 28 23:44:06.247007 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 28 23:44:06.247054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 28 23:44:06.248771 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 28 23:44:06.248823 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 28 23:44:06.250567 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 28 23:44:06.250622 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 28 23:44:06.252437 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Oct 28 23:44:06.252520 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 28 23:44:06.254510 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 28 23:44:06.256392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 28 23:44:06.269241 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 28 23:44:06.269371 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 28 23:44:06.273260 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 28 23:44:06.273531 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 28 23:44:06.273629 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 28 23:44:06.277282 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 28 23:44:06.277867 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 28 23:44:06.280560 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 28 23:44:06.280597 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 28 23:44:06.283397 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 28 23:44:06.284527 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 28 23:44:06.284616 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 23:44:06.286957 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 23:44:06.287003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 23:44:06.290149 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 28 23:44:06.290199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 28 23:44:06.292836 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Oct 28 23:44:06.292881 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 23:44:06.295943 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:44:06.300575 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 28 23:44:06.300646 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 28 23:44:06.317082 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 28 23:44:06.317261 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:44:06.319740 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 28 23:44:06.319847 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 28 23:44:06.322298 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 28 23:44:06.322361 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 28 23:44:06.323820 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 28 23:44:06.323854 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 23:44:06.325618 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 28 23:44:06.325667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 28 23:44:06.328474 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 28 23:44:06.328518 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 28 23:44:06.331421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 28 23:44:06.331487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 23:44:06.335301 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 28 23:44:06.336523 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Oct 28 23:44:06.336584 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 23:44:06.339671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 28 23:44:06.339716 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 23:44:06.343403 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 28 23:44:06.343463 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 23:44:06.347110 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 28 23:44:06.347153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 23:44:06.349476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 23:44:06.349527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:44:06.353869 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 28 23:44:06.353921 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Oct 28 23:44:06.353950 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 28 23:44:06.353979 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 28 23:44:06.364738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 28 23:44:06.365514 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 28 23:44:06.367121 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 28 23:44:06.369780 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 28 23:44:06.378869 systemd[1]: Switching root. 
Oct 28 23:44:06.417092 systemd-journald[243]: Journal stopped Oct 28 23:44:07.164383 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Oct 28 23:44:07.164429 kernel: SELinux: policy capability network_peer_controls=1 Oct 28 23:44:07.164469 kernel: SELinux: policy capability open_perms=1 Oct 28 23:44:07.164480 kernel: SELinux: policy capability extended_socket_class=1 Oct 28 23:44:07.164495 kernel: SELinux: policy capability always_check_network=0 Oct 28 23:44:07.164509 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 28 23:44:07.164519 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 28 23:44:07.164529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 28 23:44:07.164542 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 28 23:44:07.164551 kernel: SELinux: policy capability userspace_initial_context=0 Oct 28 23:44:07.164562 systemd[1]: Successfully loaded SELinux policy in 58.848ms. Oct 28 23:44:07.164574 kernel: audit: type=1403 audit(1761695046.580:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 28 23:44:07.164584 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.361ms. Oct 28 23:44:07.164595 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 23:44:07.164614 systemd[1]: Detected virtualization kvm. Oct 28 23:44:07.164625 systemd[1]: Detected architecture arm64. Oct 28 23:44:07.164640 systemd[1]: Detected first boot. Oct 28 23:44:07.164650 systemd[1]: Initializing machine ID from VM UUID. Oct 28 23:44:07.164661 zram_generator::config[1084]: No configuration found. 
Oct 28 23:44:07.164671 kernel: NET: Registered PF_VSOCK protocol family Oct 28 23:44:07.164681 systemd[1]: Populated /etc with preset unit settings. Oct 28 23:44:07.164691 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 28 23:44:07.164701 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 28 23:44:07.164711 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 28 23:44:07.164722 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 28 23:44:07.164733 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 28 23:44:07.164743 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 28 23:44:07.164753 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 28 23:44:07.164763 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 28 23:44:07.164773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 28 23:44:07.164787 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 28 23:44:07.164797 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 28 23:44:07.164807 systemd[1]: Created slice user.slice - User and Session Slice. Oct 28 23:44:07.164818 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 23:44:07.164828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 23:44:07.164838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 28 23:44:07.164848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 28 23:44:07.164858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Oct 28 23:44:07.164868 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 23:44:07.164879 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 28 23:44:07.164889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 23:44:07.164901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 23:44:07.164911 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 28 23:44:07.164921 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 28 23:44:07.164931 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 28 23:44:07.164941 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 28 23:44:07.164952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:44:07.164962 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 23:44:07.164972 systemd[1]: Reached target slices.target - Slice Units. Oct 28 23:44:07.164983 systemd[1]: Reached target swap.target - Swaps. Oct 28 23:44:07.164994 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 28 23:44:07.165004 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 28 23:44:07.165014 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 28 23:44:07.165025 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 23:44:07.165035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 23:44:07.165045 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 23:44:07.165055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 28 23:44:07.165065 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Oct 28 23:44:07.165076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 28 23:44:07.165087 systemd[1]: Mounting media.mount - External Media Directory... Oct 28 23:44:07.165097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 28 23:44:07.165107 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 28 23:44:07.165117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 28 23:44:07.165127 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 28 23:44:07.165138 systemd[1]: Reached target machines.target - Containers. Oct 28 23:44:07.165148 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 28 23:44:07.165158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:44:07.165169 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 23:44:07.165179 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 28 23:44:07.165189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 23:44:07.165202 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 23:44:07.165212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 23:44:07.165222 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 28 23:44:07.165236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 23:44:07.165283 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 28 23:44:07.165306 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Oct 28 23:44:07.165317 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 28 23:44:07.165327 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 28 23:44:07.165337 systemd[1]: Stopped systemd-fsck-usr.service. Oct 28 23:44:07.165346 kernel: fuse: init (API version 7.41) Oct 28 23:44:07.165355 kernel: loop: module loaded Oct 28 23:44:07.165366 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:44:07.165376 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 23:44:07.165386 kernel: ACPI: bus type drm_connector registered Oct 28 23:44:07.165397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 23:44:07.165408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 23:44:07.165418 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 28 23:44:07.165428 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 28 23:44:07.165449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 23:44:07.165462 systemd[1]: verity-setup.service: Deactivated successfully. Oct 28 23:44:07.165472 systemd[1]: Stopped verity-setup.service. Oct 28 23:44:07.165501 systemd-journald[1159]: Collecting audit messages is disabled. Oct 28 23:44:07.165524 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 28 23:44:07.165535 systemd-journald[1159]: Journal started Oct 28 23:44:07.165554 systemd-journald[1159]: Runtime Journal (/run/log/journal/93f19a88ac4e4705b12d820b354ff1f0) is 6M, max 48.5M, 42.4M free. Oct 28 23:44:06.933412 systemd[1]: Queued start job for default target multi-user.target. 
Oct 28 23:44:06.951553 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 28 23:44:06.951967 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 28 23:44:07.167721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 28 23:44:07.169878 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 23:44:07.170552 systemd[1]: Mounted media.mount - External Media Directory. Oct 28 23:44:07.171766 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 28 23:44:07.173103 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 28 23:44:07.174508 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 28 23:44:07.175835 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 28 23:44:07.177396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 23:44:07.179086 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 28 23:44:07.179255 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 28 23:44:07.180854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 23:44:07.181028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 23:44:07.182555 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 23:44:07.182724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 23:44:07.184172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 23:44:07.184348 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 23:44:07.185978 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 23:44:07.186149 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 23:44:07.187838 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 28 23:44:07.188000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 23:44:07.189614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 23:44:07.191311 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 23:44:07.193045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 23:44:07.194764 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 23:44:07.208476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 23:44:07.210426 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 23:44:07.212825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 23:44:07.214940 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 23:44:07.216291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 23:44:07.216330 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 23:44:07.218235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 23:44:07.225238 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 23:44:07.226790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:44:07.228143 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 23:44:07.230290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 28 23:44:07.231662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 28 23:44:07.232581 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 23:44:07.235541 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 23:44:07.238813 systemd-journald[1159]: Time spent on flushing to /var/log/journal/93f19a88ac4e4705b12d820b354ff1f0 is 20.838ms for 887 entries. Oct 28 23:44:07.238813 systemd-journald[1159]: System Journal (/var/log/journal/93f19a88ac4e4705b12d820b354ff1f0) is 8M, max 195.6M, 187.6M free. Oct 28 23:44:07.267220 systemd-journald[1159]: Received client request to flush runtime journal. Oct 28 23:44:07.267253 kernel: loop0: detected capacity change from 0 to 200800 Oct 28 23:44:07.239277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 23:44:07.242481 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 23:44:07.245878 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 23:44:07.248618 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 23:44:07.250859 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 23:44:07.256467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 23:44:07.258103 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 23:44:07.263592 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 23:44:07.269388 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 23:44:07.276455 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 23:44:07.276814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 23:44:07.282125 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. 
Oct 28 23:44:07.282145 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Oct 28 23:44:07.285626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 23:44:07.288581 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 23:44:07.298418 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 23:44:07.299064 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 23:44:07.301458 kernel: loop1: detected capacity change from 0 to 100632 Oct 28 23:44:07.315711 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 23:44:07.320450 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 23:44:07.322465 kernel: loop2: detected capacity change from 0 to 119368 Oct 28 23:44:07.342010 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Oct 28 23:44:07.342030 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Oct 28 23:44:07.345042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 23:44:07.358466 kernel: loop3: detected capacity change from 0 to 200800 Oct 28 23:44:07.365504 kernel: loop4: detected capacity change from 0 to 100632 Oct 28 23:44:07.372471 kernel: loop5: detected capacity change from 0 to 119368 Oct 28 23:44:07.377920 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 28 23:44:07.378347 (sd-merge)[1226]: Merged extensions into '/usr'. Oct 28 23:44:07.381723 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 23:44:07.381859 systemd[1]: Reloading... Oct 28 23:44:07.439462 zram_generator::config[1252]: No configuration found. Oct 28 23:44:07.487286 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Oct 28 23:44:07.577109 systemd[1]: Reloading finished in 194 ms. Oct 28 23:44:07.602237 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 23:44:07.605034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 23:44:07.621590 systemd[1]: Starting ensure-sysext.service... Oct 28 23:44:07.623519 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 23:44:07.632126 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Oct 28 23:44:07.632139 systemd[1]: Reloading... Oct 28 23:44:07.636529 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 23:44:07.637088 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 23:44:07.637348 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 23:44:07.637713 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 23:44:07.638377 systemd-tmpfiles[1288]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 23:44:07.638705 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 28 23:44:07.638827 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 28 23:44:07.641587 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 23:44:07.641696 systemd-tmpfiles[1288]: Skipping /boot Oct 28 23:44:07.647857 systemd-tmpfiles[1288]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 23:44:07.647874 systemd-tmpfiles[1288]: Skipping /boot Oct 28 23:44:07.679476 zram_generator::config[1314]: No configuration found. Oct 28 23:44:07.809514 systemd[1]: Reloading finished in 177 ms. 
Oct 28 23:44:07.826937 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 23:44:07.833531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 23:44:07.840423 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 23:44:07.842998 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 28 23:44:07.845512 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 23:44:07.848432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 23:44:07.851419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:44:07.856090 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 23:44:07.862488 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 23:44:07.865071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:44:07.870506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 23:44:07.874526 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 23:44:07.877666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 23:44:07.879125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:44:07.879239 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:44:07.881093 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Oct 28 23:44:07.883409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 23:44:07.883581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 23:44:07.885520 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 23:44:07.885681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 23:44:07.892046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 23:44:07.892217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 23:44:07.898287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 23:44:07.898540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 23:44:07.898964 augenrules[1382]: No rules Oct 28 23:44:07.899860 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 23:44:07.902083 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Oct 28 23:44:07.902241 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 23:44:07.902456 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 23:44:07.908181 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 23:44:07.918640 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 23:44:07.920666 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:44:07.928781 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 23:44:07.930294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:44:07.931357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Oct 28 23:44:07.934504 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 23:44:07.953762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 23:44:07.957733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 23:44:07.958961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:44:07.959094 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:44:07.961798 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 23:44:07.963067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 23:44:07.974566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 23:44:07.979585 augenrules[1406]: /sbin/augenrules: No change Oct 28 23:44:07.981464 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 23:44:07.983719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 23:44:07.983879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 23:44:07.986749 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 23:44:07.986943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 23:44:07.993728 systemd[1]: Finished ensure-sysext.service. Oct 28 23:44:07.995436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 23:44:07.995701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 28 23:44:07.995871 augenrules[1448]: No rules Oct 28 23:44:07.997337 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 23:44:07.997580 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 23:44:07.999867 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 23:44:08.000020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 23:44:08.014288 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 28 23:44:08.022197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 23:44:08.022244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 23:44:08.026370 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 23:44:08.061653 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 23:44:08.065711 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 23:44:08.084176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 23:44:08.091735 systemd-networkd[1432]: lo: Link UP Oct 28 23:44:08.092029 systemd-networkd[1432]: lo: Gained carrier Oct 28 23:44:08.092998 systemd-networkd[1432]: Enumeration completed Oct 28 23:44:08.093239 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 23:44:08.093681 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 28 23:44:08.093774 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 28 23:44:08.094383 systemd-networkd[1432]: eth0: Link UP Oct 28 23:44:08.094632 systemd-networkd[1432]: eth0: Gained carrier Oct 28 23:44:08.094704 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 28 23:44:08.099628 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 23:44:08.099693 systemd-resolved[1355]: Positive Trust Anchors: Oct 28 23:44:08.099705 systemd-resolved[1355]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 23:44:08.099736 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 23:44:08.102493 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 23:44:08.103621 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 23:44:08.110330 systemd-resolved[1355]: Defaulting to hostname 'linux'. Oct 28 23:44:08.111537 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 23:44:08.113196 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 23:44:08.114576 systemd[1]: Reached target network.target - Network. Oct 28 23:44:08.115562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 28 23:44:08.116830 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 23:44:08.118210 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 28 23:44:08.119717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 23:44:08.121827 systemd-timesyncd[1467]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 28 23:44:08.121891 systemd-timesyncd[1467]: Initial clock synchronization to Tue 2025-10-28 23:44:08.065645 UTC. Oct 28 23:44:08.122046 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 23:44:08.123430 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 23:44:08.123475 systemd[1]: Reached target paths.target - Path Units. Oct 28 23:44:08.124406 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 23:44:08.127033 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 23:44:08.128260 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 23:44:08.129640 systemd[1]: Reached target timers.target - Timer Units. Oct 28 23:44:08.131326 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 23:44:08.133884 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 23:44:08.138151 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 23:44:08.139697 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 23:44:08.140971 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 23:44:08.148159 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Oct 28 23:44:08.149712 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 23:44:08.152611 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 23:44:08.155792 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 23:44:08.157672 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 23:44:08.158742 systemd[1]: Reached target basic.target - Basic System. Oct 28 23:44:08.160668 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:44:08.160713 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:44:08.161813 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 23:44:08.165817 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 23:44:08.171903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 23:44:08.176632 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 28 23:44:08.181619 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 23:44:08.182756 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 23:44:08.190319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 23:44:08.195289 jq[1497]: false Oct 28 23:44:08.195667 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 23:44:08.197777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 23:44:08.202623 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 28 23:44:08.204741 extend-filesystems[1498]: Found /dev/vda6 Oct 28 23:44:08.206806 extend-filesystems[1498]: Found /dev/vda9 Oct 28 23:44:08.208527 extend-filesystems[1498]: Checking size of /dev/vda9 Oct 28 23:44:08.207972 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 23:44:08.209997 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 23:44:08.210403 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 23:44:08.211835 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 23:44:08.215279 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 23:44:08.219197 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 23:44:08.221044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 23:44:08.222400 jq[1518]: true Oct 28 23:44:08.222502 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 23:44:08.222801 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 23:44:08.223910 extend-filesystems[1498]: Resized partition /dev/vda9 Oct 28 23:44:08.222995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 23:44:08.227554 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 23:44:08.227767 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Oct 28 23:44:08.229266 extend-filesystems[1523]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 23:44:08.235554 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 28 23:44:08.246101 update_engine[1517]: I20251028 23:44:08.245863 1517 main.cc:92] Flatcar Update Engine starting Oct 28 23:44:08.256866 jq[1524]: true Oct 28 23:44:08.263182 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 28 23:44:08.264828 (ntainerd)[1531]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 28 23:44:08.268269 dbus-daemon[1495]: [system] SELinux support is enabled Oct 28 23:44:08.268457 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 23:44:08.281925 update_engine[1517]: I20251028 23:44:08.271277 1517 update_check_scheduler.cc:74] Next update check in 6m2s Oct 28 23:44:08.272497 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 23:44:08.272631 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 28 23:44:08.276335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:44:08.277712 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 23:44:08.277845 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 23:44:08.284561 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 23:44:08.284561 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 23:44:08.284561 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Oct 28 23:44:08.289766 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Oct 28 23:44:08.284587 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 23:44:08.287633 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 23:44:08.299210 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button) Oct 28 23:44:08.299614 systemd-logind[1510]: New seat seat0. Oct 28 23:44:08.300682 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 23:44:08.302930 systemd[1]: Started update-engine.service - Update Engine. Oct 28 23:44:08.307090 tar[1522]: linux-arm64/LICENSE Oct 28 23:44:08.308138 tar[1522]: linux-arm64/helm Oct 28 23:44:08.307616 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 23:44:08.324419 bash[1561]: Updated "/home/core/.ssh/authorized_keys" Oct 28 23:44:08.329633 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 23:44:08.332328 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 28 23:44:08.365920 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 23:44:08.369851 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 28 23:44:08.435848 containerd[1531]: time="2025-10-28T23:44:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 23:44:08.436390 containerd[1531]: time="2025-10-28T23:44:08.436350120Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 28 23:44:08.445321 containerd[1531]: time="2025-10-28T23:44:08.445271400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.6µs" Oct 28 23:44:08.445321 containerd[1531]: time="2025-10-28T23:44:08.445308000Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 23:44:08.445321 containerd[1531]: time="2025-10-28T23:44:08.445326840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 23:44:08.445645 containerd[1531]: time="2025-10-28T23:44:08.445495160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 23:44:08.445645 containerd[1531]: time="2025-10-28T23:44:08.445510600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 23:44:08.445645 containerd[1531]: time="2025-10-28T23:44:08.445532480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445645 containerd[1531]: time="2025-10-28T23:44:08.445577880Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445645 containerd[1531]: time="2025-10-28T23:44:08.445589480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445843 
containerd[1531]: time="2025-10-28T23:44:08.445813840Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445843 containerd[1531]: time="2025-10-28T23:44:08.445836440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445888 containerd[1531]: time="2025-10-28T23:44:08.445848200Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:44:08.445888 containerd[1531]: time="2025-10-28T23:44:08.445857080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 23:44:08.446049 containerd[1531]: time="2025-10-28T23:44:08.445936000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 23:44:08.446172 containerd[1531]: time="2025-10-28T23:44:08.446147120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:44:08.446201 containerd[1531]: time="2025-10-28T23:44:08.446182680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:44:08.446201 containerd[1531]: time="2025-10-28T23:44:08.446193920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 23:44:08.446241 containerd[1531]: time="2025-10-28T23:44:08.446226000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 23:44:08.446826 containerd[1531]: 
time="2025-10-28T23:44:08.446453440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 23:44:08.446826 containerd[1531]: time="2025-10-28T23:44:08.446557920Z" level=info msg="metadata content store policy set" policy=shared Oct 28 23:44:08.461314 containerd[1531]: time="2025-10-28T23:44:08.461258720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 23:44:08.461416 containerd[1531]: time="2025-10-28T23:44:08.461350760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 23:44:08.461416 containerd[1531]: time="2025-10-28T23:44:08.461367280Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 23:44:08.461510 containerd[1531]: time="2025-10-28T23:44:08.461479080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 23:44:08.461541 containerd[1531]: time="2025-10-28T23:44:08.461509720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 23:44:08.461541 containerd[1531]: time="2025-10-28T23:44:08.461530640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 23:44:08.461584 containerd[1531]: time="2025-10-28T23:44:08.461544200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 23:44:08.461584 containerd[1531]: time="2025-10-28T23:44:08.461557280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 23:44:08.461584 containerd[1531]: time="2025-10-28T23:44:08.461568280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 23:44:08.461584 containerd[1531]: time="2025-10-28T23:44:08.461579440Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 23:44:08.461662 containerd[1531]: time="2025-10-28T23:44:08.461589160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 23:44:08.461662 containerd[1531]: time="2025-10-28T23:44:08.461611440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461752840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461786960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461803960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461815000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461825360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461835520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461847240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461859600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461875440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461886440Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 23:44:08.461933 containerd[1531]: time="2025-10-28T23:44:08.461898800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 23:44:08.462127 containerd[1531]: time="2025-10-28T23:44:08.462093000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 23:44:08.462127 containerd[1531]: time="2025-10-28T23:44:08.462109480Z" level=info msg="Start snapshots syncer" Oct 28 23:44:08.462160 containerd[1531]: time="2025-10-28T23:44:08.462137040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 23:44:08.462653 containerd[1531]: time="2025-10-28T23:44:08.462473480Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMSco
reAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 23:44:08.462653 containerd[1531]: time="2025-10-28T23:44:08.462532040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 23:44:08.462796 containerd[1531]: time="2025-10-28T23:44:08.462626560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 23:44:08.462796 containerd[1531]: time="2025-10-28T23:44:08.462746680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 23:44:08.462796 containerd[1531]: time="2025-10-28T23:44:08.462774560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 23:44:08.462796 containerd[1531]: time="2025-10-28T23:44:08.462785480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 23:44:08.462796 containerd[1531]: time="2025-10-28T23:44:08.462796080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 23:44:08.462874 containerd[1531]: time="2025-10-28T23:44:08.462807960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 
23:44:08.462874 containerd[1531]: time="2025-10-28T23:44:08.462818480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 23:44:08.462874 containerd[1531]: time="2025-10-28T23:44:08.462830080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 23:44:08.462874 containerd[1531]: time="2025-10-28T23:44:08.462864520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 23:44:08.462938 containerd[1531]: time="2025-10-28T23:44:08.462877720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 23:44:08.462938 containerd[1531]: time="2025-10-28T23:44:08.462890960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 23:44:08.462970 containerd[1531]: time="2025-10-28T23:44:08.462938560Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:44:08.462970 containerd[1531]: time="2025-10-28T23:44:08.462955040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:44:08.462970 containerd[1531]: time="2025-10-28T23:44:08.462964120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:44:08.463022 containerd[1531]: time="2025-10-28T23:44:08.462972680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:44:08.463022 containerd[1531]: time="2025-10-28T23:44:08.462980920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 23:44:08.463022 containerd[1531]: time="2025-10-28T23:44:08.462990120Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 23:44:08.463022 containerd[1531]: time="2025-10-28T23:44:08.463000880Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 23:44:08.463083 containerd[1531]: time="2025-10-28T23:44:08.463076280Z" level=info msg="runtime interface created" Oct 28 23:44:08.463083 containerd[1531]: time="2025-10-28T23:44:08.463081920Z" level=info msg="created NRI interface" Oct 28 23:44:08.463619 containerd[1531]: time="2025-10-28T23:44:08.463180800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 23:44:08.463619 containerd[1531]: time="2025-10-28T23:44:08.463202560Z" level=info msg="Connect containerd service" Oct 28 23:44:08.463619 containerd[1531]: time="2025-10-28T23:44:08.463250360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 23:44:08.464249 containerd[1531]: time="2025-10-28T23:44:08.464206480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 23:44:08.541433 containerd[1531]: time="2025-10-28T23:44:08.541349800Z" level=info msg="Start subscribing containerd event" Oct 28 23:44:08.541433 containerd[1531]: time="2025-10-28T23:44:08.541430200Z" level=info msg="Start recovering state" Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541525880Z" level=info msg="Start event monitor" Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541539840Z" level=info msg="Start cni network conf syncer for default" Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541548800Z" level=info msg="Start streaming server" Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541557560Z" level=info 
msg="Registered namespace \"k8s.io\" with NRI" Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541564960Z" level=info msg="runtime interface starting up..." Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541572120Z" level=info msg="starting plugins..." Oct 28 23:44:08.541593 containerd[1531]: time="2025-10-28T23:44:08.541587360Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 23:44:08.541727 containerd[1531]: time="2025-10-28T23:44:08.541393680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 23:44:08.541828 containerd[1531]: time="2025-10-28T23:44:08.541744320Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 23:44:08.541828 containerd[1531]: time="2025-10-28T23:44:08.541799360Z" level=info msg="containerd successfully booted in 0.106391s" Oct 28 23:44:08.541912 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 23:44:08.609589 tar[1522]: linux-arm64/README.md Oct 28 23:44:08.628499 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 23:44:08.961414 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 23:44:08.981537 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 23:44:08.985478 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 23:44:09.015999 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 23:44:09.016255 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 23:44:09.019224 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 23:44:09.054688 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 23:44:09.057583 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 23:44:09.059667 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Oct 28 23:44:09.061057 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 23:44:09.899559 systemd-networkd[1432]: eth0: Gained IPv6LL Oct 28 23:44:09.903479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 23:44:09.905188 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 23:44:09.907580 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 23:44:09.909940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:09.937037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 23:44:09.951897 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 23:44:09.952104 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 28 23:44:09.954521 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 23:44:09.956530 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 23:44:10.446593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:44:10.448842 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 23:44:10.452666 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:44:10.452911 systemd[1]: Startup finished in 2.057s (kernel) + 4.964s (initrd) + 3.931s (userspace) = 10.953s. 
Oct 28 23:44:10.748229 kubelet[1634]: E1028 23:44:10.748113 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:44:10.750523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:44:10.750659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:44:10.750965 systemd[1]: kubelet.service: Consumed 683ms CPU time, 248.9M memory peak. Oct 28 23:44:14.763762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 23:44:14.764754 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:37696.service - OpenSSH per-connection server daemon (10.0.0.1:37696). Oct 28 23:44:14.820716 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 37696 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:14.822254 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:14.828062 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 23:44:14.828935 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 23:44:14.834988 systemd-logind[1510]: New session 1 of user core. Oct 28 23:44:14.856901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 23:44:14.861647 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 23:44:14.885094 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 28 23:44:14.887503 systemd-logind[1510]: New session c1 of user core. Oct 28 23:44:14.993894 systemd[1653]: Queued start job for default target default.target. 
Oct 28 23:44:15.011499 systemd[1653]: Created slice app.slice - User Application Slice. Oct 28 23:44:15.011527 systemd[1653]: Reached target paths.target - Paths. Oct 28 23:44:15.011566 systemd[1653]: Reached target timers.target - Timers. Oct 28 23:44:15.012763 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 23:44:15.024675 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 23:44:15.024785 systemd[1653]: Reached target sockets.target - Sockets. Oct 28 23:44:15.024823 systemd[1653]: Reached target basic.target - Basic System. Oct 28 23:44:15.024849 systemd[1653]: Reached target default.target - Main User Target. Oct 28 23:44:15.024874 systemd[1653]: Startup finished in 129ms. Oct 28 23:44:15.025893 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 23:44:15.027601 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 23:44:15.095716 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:37706.service - OpenSSH per-connection server daemon (10.0.0.1:37706). Oct 28 23:44:15.163657 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 37706 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:15.164876 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:15.169064 systemd-logind[1510]: New session 2 of user core. Oct 28 23:44:15.185680 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 28 23:44:15.238520 sshd[1667]: Connection closed by 10.0.0.1 port 37706 Oct 28 23:44:15.238548 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Oct 28 23:44:15.251326 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:37706.service: Deactivated successfully. Oct 28 23:44:15.252675 systemd[1]: session-2.scope: Deactivated successfully. Oct 28 23:44:15.253288 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. 
Oct 28 23:44:15.255302 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710). Oct 28 23:44:15.256178 systemd-logind[1510]: Removed session 2. Oct 28 23:44:15.299358 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:15.300635 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:15.304969 systemd-logind[1510]: New session 3 of user core. Oct 28 23:44:15.315593 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 28 23:44:15.368894 sshd[1676]: Connection closed by 10.0.0.1 port 37710 Oct 28 23:44:15.369565 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Oct 28 23:44:15.392590 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:37710.service: Deactivated successfully. Oct 28 23:44:15.395007 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 23:44:15.396636 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Oct 28 23:44:15.398747 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:37726.service - OpenSSH per-connection server daemon (10.0.0.1:37726). Oct 28 23:44:15.401367 systemd-logind[1510]: Removed session 3. Oct 28 23:44:15.457691 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 37726 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:15.458894 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:15.464239 systemd-logind[1510]: New session 4 of user core. Oct 28 23:44:15.478602 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 23:44:15.529746 sshd[1685]: Connection closed by 10.0.0.1 port 37726 Oct 28 23:44:15.529625 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Oct 28 23:44:15.537175 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:37726.service: Deactivated successfully. 
Oct 28 23:44:15.539694 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 23:44:15.540317 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Oct 28 23:44:15.542296 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730). Oct 28 23:44:15.546470 systemd-logind[1510]: Removed session 4. Oct 28 23:44:15.597163 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:15.598343 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:15.602960 systemd-logind[1510]: New session 5 of user core. Oct 28 23:44:15.615654 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 28 23:44:15.672312 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 23:44:15.672615 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:44:15.683361 sudo[1695]: pam_unix(sudo:session): session closed for user root Oct 28 23:44:15.684648 sshd[1694]: Connection closed by 10.0.0.1 port 37730 Oct 28 23:44:15.685380 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Oct 28 23:44:15.694322 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:37730.service: Deactivated successfully. Oct 28 23:44:15.695636 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 23:44:15.697102 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Oct 28 23:44:15.699133 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:37742.service - OpenSSH per-connection server daemon (10.0.0.1:37742). Oct 28 23:44:15.699869 systemd-logind[1510]: Removed session 5. 
Oct 28 23:44:15.751787 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 37742 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:15.752987 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:15.757277 systemd-logind[1510]: New session 6 of user core. Oct 28 23:44:15.773607 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 28 23:44:15.826503 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 23:44:15.826839 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:44:15.956773 sudo[1706]: pam_unix(sudo:session): session closed for user root Oct 28 23:44:15.961827 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 23:44:15.962082 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:44:15.973917 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 23:44:16.015008 augenrules[1728]: No rules Oct 28 23:44:16.015619 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 23:44:16.015810 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 23:44:16.017128 sudo[1705]: pam_unix(sudo:session): session closed for user root Oct 28 23:44:16.018664 sshd[1704]: Connection closed by 10.0.0.1 port 37742 Oct 28 23:44:16.018925 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Oct 28 23:44:16.028294 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:37742.service: Deactivated successfully. Oct 28 23:44:16.029608 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 23:44:16.031824 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. 
Oct 28 23:44:16.032656 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:37752.service - OpenSSH per-connection server daemon (10.0.0.1:37752). Oct 28 23:44:16.033728 systemd-logind[1510]: Removed session 6. Oct 28 23:44:16.088902 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 37752 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:44:16.090008 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:44:16.093779 systemd-logind[1510]: New session 7 of user core. Oct 28 23:44:16.107584 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 23:44:16.157096 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 23:44:16.157347 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:44:16.431530 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 23:44:16.449801 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 23:44:16.665756 dockerd[1762]: time="2025-10-28T23:44:16.665683952Z" level=info msg="Starting up" Oct 28 23:44:16.666731 dockerd[1762]: time="2025-10-28T23:44:16.666702898Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 23:44:16.678893 dockerd[1762]: time="2025-10-28T23:44:16.678844645Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 23:44:16.734212 dockerd[1762]: time="2025-10-28T23:44:16.734096532Z" level=info msg="Loading containers: start." Oct 28 23:44:16.746480 kernel: Initializing XFRM netlink socket Oct 28 23:44:16.946996 systemd-networkd[1432]: docker0: Link UP Oct 28 23:44:16.950165 dockerd[1762]: time="2025-10-28T23:44:16.950119484Z" level=info msg="Loading containers: done." 
Oct 28 23:44:16.961843 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2538394714-merged.mount: Deactivated successfully. Oct 28 23:44:16.963089 dockerd[1762]: time="2025-10-28T23:44:16.963027475Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 23:44:16.963163 dockerd[1762]: time="2025-10-28T23:44:16.963106657Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 23:44:16.963202 dockerd[1762]: time="2025-10-28T23:44:16.963183165Z" level=info msg="Initializing buildkit" Oct 28 23:44:16.994697 dockerd[1762]: time="2025-10-28T23:44:16.994585918Z" level=info msg="Completed buildkit initialization" Oct 28 23:44:17.005694 dockerd[1762]: time="2025-10-28T23:44:17.005651210Z" level=info msg="Daemon has completed initialization" Oct 28 23:44:17.005809 dockerd[1762]: time="2025-10-28T23:44:17.005757833Z" level=info msg="API listen on /run/docker.sock" Oct 28 23:44:17.005900 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 23:44:17.480210 containerd[1531]: time="2025-10-28T23:44:17.480101151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 28 23:44:18.207508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036265556.mount: Deactivated successfully. 
Oct 28 23:44:19.462893 containerd[1531]: time="2025-10-28T23:44:19.462848616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:19.463822 containerd[1531]: time="2025-10-28T23:44:19.463785604Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 28 23:44:19.465096 containerd[1531]: time="2025-10-28T23:44:19.464710135Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:19.467896 containerd[1531]: time="2025-10-28T23:44:19.467873408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:19.468997 containerd[1531]: time="2025-10-28T23:44:19.468972536Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.988831518s" Oct 28 23:44:19.469059 containerd[1531]: time="2025-10-28T23:44:19.469004557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 28 23:44:19.469552 containerd[1531]: time="2025-10-28T23:44:19.469524356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 28 23:44:20.512967 containerd[1531]: time="2025-10-28T23:44:20.512917810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:20.514006 containerd[1531]: time="2025-10-28T23:44:20.513979653Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 28 23:44:20.515465 containerd[1531]: time="2025-10-28T23:44:20.515241213Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:20.519217 containerd[1531]: time="2025-10-28T23:44:20.519187711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:20.519877 containerd[1531]: time="2025-10-28T23:44:20.519845887Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.050282361s" Oct 28 23:44:20.519877 containerd[1531]: time="2025-10-28T23:44:20.519876078Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 28 23:44:20.521080 containerd[1531]: time="2025-10-28T23:44:20.521043949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 28 23:44:21.001206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 23:44:21.003188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:21.168608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 23:44:21.189946 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:44:21.230300 kubelet[2051]: E1028 23:44:21.230223 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:44:21.233005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:44:21.233140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:44:21.233464 systemd[1]: kubelet.service: Consumed 143ms CPU time, 105.2M memory peak. Oct 28 23:44:21.450926 containerd[1531]: time="2025-10-28T23:44:21.450816696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:21.451747 containerd[1531]: time="2025-10-28T23:44:21.451716423Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 28 23:44:21.452665 containerd[1531]: time="2025-10-28T23:44:21.452617189Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:21.455120 containerd[1531]: time="2025-10-28T23:44:21.455090690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:21.456198 containerd[1531]: time="2025-10-28T23:44:21.456166049Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id 
\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 935.084837ms" Oct 28 23:44:21.456239 containerd[1531]: time="2025-10-28T23:44:21.456202078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 28 23:44:21.456731 containerd[1531]: time="2025-10-28T23:44:21.456707483Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 28 23:44:22.651580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480109436.mount: Deactivated successfully. Oct 28 23:44:22.943615 containerd[1531]: time="2025-10-28T23:44:22.943496407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:22.944174 containerd[1531]: time="2025-10-28T23:44:22.944142288Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 28 23:44:22.945094 containerd[1531]: time="2025-10-28T23:44:22.945070179Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:22.947928 containerd[1531]: time="2025-10-28T23:44:22.947894084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:22.948577 containerd[1531]: time="2025-10-28T23:44:22.948546597Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.491810033s" Oct 28 23:44:22.948613 containerd[1531]: time="2025-10-28T23:44:22.948593539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 28 23:44:22.949165 containerd[1531]: time="2025-10-28T23:44:22.949138984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 28 23:44:23.520678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719751557.mount: Deactivated successfully. Oct 28 23:44:24.494587 containerd[1531]: time="2025-10-28T23:44:24.494516516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.495405 containerd[1531]: time="2025-10-28T23:44:24.495366271Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 28 23:44:24.497256 containerd[1531]: time="2025-10-28T23:44:24.497206847Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.500998 containerd[1531]: time="2025-10-28T23:44:24.500945666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.502863 containerd[1531]: time="2025-10-28T23:44:24.502821689Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.553645228s" Oct 28 23:44:24.503079 containerd[1531]: time="2025-10-28T23:44:24.502972666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 28 23:44:24.503636 containerd[1531]: time="2025-10-28T23:44:24.503432190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 28 23:44:24.966148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3053591927.mount: Deactivated successfully. Oct 28 23:44:24.971722 containerd[1531]: time="2025-10-28T23:44:24.971671209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.972491 containerd[1531]: time="2025-10-28T23:44:24.972454507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 28 23:44:24.973243 containerd[1531]: time="2025-10-28T23:44:24.973208193Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.975733 containerd[1531]: time="2025-10-28T23:44:24.975685406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:24.976330 containerd[1531]: time="2025-10-28T23:44:24.976291712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 
472.815803ms" Oct 28 23:44:24.976330 containerd[1531]: time="2025-10-28T23:44:24.976326519Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 28 23:44:24.976869 containerd[1531]: time="2025-10-28T23:44:24.976823089Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 28 23:44:28.331621 containerd[1531]: time="2025-10-28T23:44:28.331543783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:28.332119 containerd[1531]: time="2025-10-28T23:44:28.332070690Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 28 23:44:28.333188 containerd[1531]: time="2025-10-28T23:44:28.333151210Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:28.337102 containerd[1531]: time="2025-10-28T23:44:28.337057162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:44:28.338059 containerd[1531]: time="2025-10-28T23:44:28.337720754Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.360863175s" Oct 28 23:44:28.338059 containerd[1531]: time="2025-10-28T23:44:28.337749818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 28 23:44:31.483696 
systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 28 23:44:31.485037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:31.654604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:44:31.667772 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:44:31.702588 kubelet[2202]: E1028 23:44:31.702534 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:44:31.704794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:44:31.704947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:44:31.705473 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.5M memory peak. Oct 28 23:44:33.265738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:44:33.266004 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.5M memory peak. Oct 28 23:44:33.268177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:33.291744 systemd[1]: Reload requested from client PID 2218 ('systemctl') (unit session-7.scope)... Oct 28 23:44:33.291760 systemd[1]: Reloading... Oct 28 23:44:33.362483 zram_generator::config[2264]: No configuration found. Oct 28 23:44:33.566418 systemd[1]: Reloading finished in 274 ms. Oct 28 23:44:33.609670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:33.611892 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 23:44:33.613536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 23:44:33.613593 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95.1M memory peak. Oct 28 23:44:33.614921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:33.756308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:44:33.770849 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 23:44:33.805369 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 23:44:33.805369 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:44:33.806002 kubelet[2309]: I1028 23:44:33.805939 2309 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 23:44:34.860247 kubelet[2309]: I1028 23:44:34.860189 2309 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 23:44:34.860247 kubelet[2309]: I1028 23:44:34.860227 2309 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 23:44:34.860247 kubelet[2309]: I1028 23:44:34.860272 2309 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 23:44:34.860685 kubelet[2309]: I1028 23:44:34.860281 2309 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 28 23:44:34.860685 kubelet[2309]: I1028 23:44:34.860543 2309 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 23:44:34.984264 kubelet[2309]: E1028 23:44:34.984202 2309 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 23:44:34.985032 kubelet[2309]: I1028 23:44:34.985004 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:44:34.990221 kubelet[2309]: I1028 23:44:34.990153 2309 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 23:44:34.993589 kubelet[2309]: I1028 23:44:34.993565 2309 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 28 23:44:34.993872 kubelet[2309]: I1028 23:44:34.993818 2309 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 23:44:34.994018 kubelet[2309]: I1028 23:44:34.993855 2309 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 23:44:34.994018 kubelet[2309]: I1028 23:44:34.994018 2309 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 23:44:34.994128 
kubelet[2309]: I1028 23:44:34.994029 2309 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 23:44:34.994153 kubelet[2309]: I1028 23:44:34.994135 2309 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 23:44:34.996246 kubelet[2309]: I1028 23:44:34.996202 2309 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:44:34.997412 kubelet[2309]: I1028 23:44:34.997393 2309 kubelet.go:475] "Attempting to sync node with API server" Oct 28 23:44:34.997483 kubelet[2309]: I1028 23:44:34.997424 2309 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 23:44:34.997950 kubelet[2309]: I1028 23:44:34.997899 2309 kubelet.go:387] "Adding apiserver pod source" Oct 28 23:44:34.997950 kubelet[2309]: I1028 23:44:34.997925 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 23:44:34.998280 kubelet[2309]: E1028 23:44:34.998237 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 23:44:34.998637 kubelet[2309]: E1028 23:44:34.998609 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 23:44:34.999293 kubelet[2309]: I1028 23:44:34.999261 2309 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 23:44:35.000066 kubelet[2309]: I1028 23:44:35.000031 2309 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 23:44:35.000066 kubelet[2309]: I1028 23:44:35.000068 2309 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 23:44:35.000145 kubelet[2309]: W1028 23:44:35.000105 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 28 23:44:35.002905 kubelet[2309]: I1028 23:44:35.002865 2309 server.go:1262] "Started kubelet" Oct 28 23:44:35.003830 kubelet[2309]: I1028 23:44:35.003792 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 23:44:35.004018 kubelet[2309]: I1028 23:44:35.003978 2309 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 23:44:35.004513 kubelet[2309]: I1028 23:44:35.004471 2309 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 23:44:35.004621 kubelet[2309]: I1028 23:44:35.004607 2309 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 23:44:35.004905 kubelet[2309]: I1028 23:44:35.004885 2309 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 23:44:35.005267 kubelet[2309]: I1028 23:44:35.005245 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 23:44:35.008126 kubelet[2309]: I1028 23:44:35.006389 2309 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 23:44:35.008550 kubelet[2309]: I1028 23:44:35.006533 2309 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 23:44:35.008550 kubelet[2309]: E1028 23:44:35.006729 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 
23:44:35.008627 kubelet[2309]: I1028 23:44:35.008601 2309 reconciler.go:29] "Reconciler: start to sync state" Oct 28 23:44:35.010564 kubelet[2309]: E1028 23:44:35.008653 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cc563c18fdb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 23:44:35.002809778 +0000 UTC m=+1.228965332,LastTimestamp:2025-10-28 23:44:35.002809778 +0000 UTC m=+1.228965332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 23:44:35.010923 kubelet[2309]: E1028 23:44:35.010774 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 23:44:35.010923 kubelet[2309]: E1028 23:44:35.010872 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms" Oct 28 23:44:35.012032 kubelet[2309]: I1028 23:44:35.012005 2309 factory.go:223] Registration of the containerd container factory successfully Oct 28 23:44:35.012032 kubelet[2309]: I1028 23:44:35.012023 2309 factory.go:223] Registration of the systemd container factory successfully Oct 28 23:44:35.012123 
kubelet[2309]: I1028 23:44:35.012112 2309 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 23:44:35.014540 kubelet[2309]: I1028 23:44:35.014514 2309 server.go:310] "Adding debug handlers to kubelet server" Oct 28 23:44:35.027307 kubelet[2309]: I1028 23:44:35.027242 2309 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 23:44:35.027953 kubelet[2309]: I1028 23:44:35.027919 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 23:44:35.027953 kubelet[2309]: I1028 23:44:35.027942 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 23:44:35.028034 kubelet[2309]: I1028 23:44:35.027961 2309 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:44:35.029516 kubelet[2309]: I1028 23:44:35.029488 2309 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 28 23:44:35.029516 kubelet[2309]: I1028 23:44:35.029516 2309 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 23:44:35.029623 kubelet[2309]: I1028 23:44:35.029535 2309 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 23:44:35.029623 kubelet[2309]: E1028 23:44:35.029576 2309 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 23:44:35.029672 kubelet[2309]: I1028 23:44:35.029653 2309 policy_none.go:49] "None policy: Start" Oct 28 23:44:35.029672 kubelet[2309]: I1028 23:44:35.029670 2309 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 23:44:35.029709 kubelet[2309]: I1028 23:44:35.029682 2309 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 23:44:35.030833 kubelet[2309]: E1028 23:44:35.030659 2309 reflector.go:205] "Failed to watch" err="failed to list 
*v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 23:44:35.031287 kubelet[2309]: I1028 23:44:35.031238 2309 policy_none.go:47] "Start" Oct 28 23:44:35.035970 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 23:44:35.048766 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 23:44:35.051932 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 23:44:35.069474 kubelet[2309]: E1028 23:44:35.069359 2309 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:44:35.069995 kubelet[2309]: I1028 23:44:35.069613 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:44:35.069995 kubelet[2309]: I1028 23:44:35.069629 2309 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:44:35.069995 kubelet[2309]: I1028 23:44:35.069887 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:44:35.070835 kubelet[2309]: E1028 23:44:35.070806 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 23:44:35.070905 kubelet[2309]: E1028 23:44:35.070848 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 23:44:35.140409 systemd[1]: Created slice kubepods-burstable-podfef681d7191ebf43d68af1cf01400c42.slice - libcontainer container kubepods-burstable-podfef681d7191ebf43d68af1cf01400c42.slice. 
Oct 28 23:44:35.154061 kubelet[2309]: E1028 23:44:35.154013 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:35.155419 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 28 23:44:35.171755 kubelet[2309]: I1028 23:44:35.171718 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:44:35.172223 kubelet[2309]: E1028 23:44:35.172180 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Oct 28 23:44:35.175907 kubelet[2309]: E1028 23:44:35.175862 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:35.178251 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 28 23:44:35.180136 kubelet[2309]: E1028 23:44:35.180095 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:35.209530 kubelet[2309]: I1028 23:44:35.209492 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:35.209582 kubelet[2309]: I1028 23:44:35.209530 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:35.209582 kubelet[2309]: I1028 23:44:35.209551 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:35.209582 kubelet[2309]: I1028 23:44:35.209567 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:44:35.209657 kubelet[2309]: I1028 23:44:35.209583 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:35.209657 kubelet[2309]: I1028 23:44:35.209597 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:35.209657 kubelet[2309]: I1028 23:44:35.209610 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:35.209657 kubelet[2309]: I1028 23:44:35.209625 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:35.209657 kubelet[2309]: I1028 23:44:35.209638 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:35.211777 kubelet[2309]: E1028 23:44:35.211743 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Oct 28 23:44:35.373783 kubelet[2309]: I1028 23:44:35.373737 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:44:35.374082 kubelet[2309]: E1028 23:44:35.374047 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Oct 28 23:44:35.456733 kubelet[2309]: E1028 23:44:35.456620 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:35.457730 containerd[1531]: time="2025-10-28T23:44:35.457667829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fef681d7191ebf43d68af1cf01400c42,Namespace:kube-system,Attempt:0,}" Oct 28 23:44:35.478168 kubelet[2309]: E1028 23:44:35.478128 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:35.478582 containerd[1531]: time="2025-10-28T23:44:35.478537801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 28 23:44:35.482318 kubelet[2309]: E1028 23:44:35.482289 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:35.482808 containerd[1531]: time="2025-10-28T23:44:35.482724529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" 
Oct 28 23:44:35.612369 kubelet[2309]: E1028 23:44:35.612319 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Oct 28 23:44:35.776197 kubelet[2309]: I1028 23:44:35.776082 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:44:35.776465 kubelet[2309]: E1028 23:44:35.776395 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Oct 28 23:44:35.999281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3315751297.mount: Deactivated successfully. Oct 28 23:44:36.006847 containerd[1531]: time="2025-10-28T23:44:36.006329121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:44:36.008775 containerd[1531]: time="2025-10-28T23:44:36.008751219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 28 23:44:36.009586 containerd[1531]: time="2025-10-28T23:44:36.009560305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:44:36.010963 containerd[1531]: time="2025-10-28T23:44:36.010922565Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:44:36.011386 containerd[1531]: time="2025-10-28T23:44:36.011364601Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 23:44:36.012132 containerd[1531]: time="2025-10-28T23:44:36.012086143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:44:36.013109 containerd[1531]: time="2025-10-28T23:44:36.012664633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 28 23:44:36.014741 containerd[1531]: time="2025-10-28T23:44:36.014702684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:44:36.016605 containerd[1531]: time="2025-10-28T23:44:36.016571128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 531.977407ms" Oct 28 23:44:36.017122 containerd[1531]: time="2025-10-28T23:44:36.017093389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 536.859677ms" Oct 28 23:44:36.017818 containerd[1531]: time="2025-10-28T23:44:36.017592573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.156468ms" Oct 28 23:44:36.020872 kubelet[2309]: E1028 23:44:36.020840 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 23:44:36.041069 containerd[1531]: time="2025-10-28T23:44:36.039853929Z" level=info msg="connecting to shim eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789" address="unix:///run/containerd/s/72405b33929e4f5d18ebc476db9dc6103720d67322e8042cad8f4c9a9dae5c92" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:44:36.046409 containerd[1531]: time="2025-10-28T23:44:36.046368727Z" level=info msg="connecting to shim 5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832" address="unix:///run/containerd/s/227f8e97c9bb6a3465f86e8462199f3369f1ef13d5b390a9b35d61fb5140bf04" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:44:36.048815 containerd[1531]: time="2025-10-28T23:44:36.048780148Z" level=info msg="connecting to shim 1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c" address="unix:///run/containerd/s/1f68fdea76335224d080d1e35499c4e154877414e0b1115dfa1956e7c6818946" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:44:36.066653 systemd[1]: Started cri-containerd-eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789.scope - libcontainer container eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789. Oct 28 23:44:36.071205 systemd[1]: Started cri-containerd-1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c.scope - libcontainer container 1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c. 
Oct 28 23:44:36.072239 systemd[1]: Started cri-containerd-5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832.scope - libcontainer container 5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832. Oct 28 23:44:36.112597 containerd[1531]: time="2025-10-28T23:44:36.112560828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c\"" Oct 28 23:44:36.113809 kubelet[2309]: E1028 23:44:36.113784 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:36.118835 containerd[1531]: time="2025-10-28T23:44:36.118796079Z" level=info msg="CreateContainer within sandbox \"1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 23:44:36.119945 containerd[1531]: time="2025-10-28T23:44:36.119896349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789\"" Oct 28 23:44:36.121143 kubelet[2309]: E1028 23:44:36.121113 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:36.122261 containerd[1531]: time="2025-10-28T23:44:36.122230504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fef681d7191ebf43d68af1cf01400c42,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832\"" Oct 28 23:44:36.123327 kubelet[2309]: E1028 23:44:36.123304 2309 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:36.124819 containerd[1531]: time="2025-10-28T23:44:36.124774139Z" level=info msg="CreateContainer within sandbox \"eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 23:44:36.128125 containerd[1531]: time="2025-10-28T23:44:36.127815639Z" level=info msg="CreateContainer within sandbox \"5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 23:44:36.129569 containerd[1531]: time="2025-10-28T23:44:36.129538511Z" level=info msg="Container e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:44:36.137328 containerd[1531]: time="2025-10-28T23:44:36.137274156Z" level=info msg="CreateContainer within sandbox \"1fe59870a061f3196221f37688fae81dcffab066ab64f1e5b9bde2a70983b60c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970\"" Oct 28 23:44:36.138474 containerd[1531]: time="2025-10-28T23:44:36.138425657Z" level=info msg="StartContainer for \"e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970\"" Oct 28 23:44:36.140573 containerd[1531]: time="2025-10-28T23:44:36.140546012Z" level=info msg="Container 51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:44:36.140794 containerd[1531]: time="2025-10-28T23:44:36.140752893Z" level=info msg="connecting to shim e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970" address="unix:///run/containerd/s/1f68fdea76335224d080d1e35499c4e154877414e0b1115dfa1956e7c6818946" protocol=ttrpc version=3 Oct 28 23:44:36.142813 
containerd[1531]: time="2025-10-28T23:44:36.142774068Z" level=info msg="Container 2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:44:36.150926 containerd[1531]: time="2025-10-28T23:44:36.150892600Z" level=info msg="CreateContainer within sandbox \"eed63f59f3a947be5b616c1912467e94eed5fd0bde8385fab467ea8b586b3789\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e\"" Oct 28 23:44:36.151723 containerd[1531]: time="2025-10-28T23:44:36.151491006Z" level=info msg="StartContainer for \"51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e\"" Oct 28 23:44:36.152028 containerd[1531]: time="2025-10-28T23:44:36.151997189Z" level=info msg="CreateContainer within sandbox \"5e7bf0697c274864fd5a8c853d1f50d32e692985c226f3cab73c3fa29e2f3832\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1\"" Oct 28 23:44:36.152694 containerd[1531]: time="2025-10-28T23:44:36.152663302Z" level=info msg="StartContainer for \"2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1\"" Oct 28 23:44:36.153062 containerd[1531]: time="2025-10-28T23:44:36.153015755Z" level=info msg="connecting to shim 51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e" address="unix:///run/containerd/s/72405b33929e4f5d18ebc476db9dc6103720d67322e8042cad8f4c9a9dae5c92" protocol=ttrpc version=3 Oct 28 23:44:36.154472 containerd[1531]: time="2025-10-28T23:44:36.154428046Z" level=info msg="connecting to shim 2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1" address="unix:///run/containerd/s/227f8e97c9bb6a3465f86e8462199f3369f1ef13d5b390a9b35d61fb5140bf04" protocol=ttrpc version=3 Oct 28 23:44:36.162605 systemd[1]: Started cri-containerd-e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970.scope - 
libcontainer container e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970. Oct 28 23:44:36.179688 systemd[1]: Started cri-containerd-51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e.scope - libcontainer container 51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e. Oct 28 23:44:36.183521 systemd[1]: Started cri-containerd-2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1.scope - libcontainer container 2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1. Oct 28 23:44:36.228070 containerd[1531]: time="2025-10-28T23:44:36.228018416Z" level=info msg="StartContainer for \"e3bb46942584c9a00970dc8dfabc2cea9c7b26499bb4391f5a806db6576a9970\" returns successfully" Oct 28 23:44:36.228337 containerd[1531]: time="2025-10-28T23:44:36.228104160Z" level=info msg="StartContainer for \"2cd19be491e27d8d87fc6341c193a0c378d83cee573610559f1fc867010baea1\" returns successfully" Oct 28 23:44:36.233608 containerd[1531]: time="2025-10-28T23:44:36.233571997Z" level=info msg="StartContainer for \"51ed0de69c220603a8a56057005c8df9250a47fde195aa4c2d3f92910a38cb2e\" returns successfully" Oct 28 23:44:36.282822 kubelet[2309]: E1028 23:44:36.282769 2309 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 23:44:36.578413 kubelet[2309]: I1028 23:44:36.578384 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:44:37.048580 kubelet[2309]: E1028 23:44:37.047884 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:37.048580 kubelet[2309]: E1028 23:44:37.048280 2309 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:37.053086 kubelet[2309]: E1028 23:44:37.053033 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:37.053483 kubelet[2309]: E1028 23:44:37.053202 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:37.057105 kubelet[2309]: E1028 23:44:37.056800 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:37.057459 kubelet[2309]: E1028 23:44:37.057346 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:37.679805 kubelet[2309]: E1028 23:44:37.679768 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 23:44:37.857177 kubelet[2309]: I1028 23:44:37.857057 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:44:37.857177 kubelet[2309]: E1028 23:44:37.857098 2309 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 28 23:44:37.869988 kubelet[2309]: E1028 23:44:37.869958 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:37.970776 kubelet[2309]: E1028 23:44:37.970500 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.057645 kubelet[2309]: E1028 23:44:38.057619 2309 
kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:38.058043 kubelet[2309]: E1028 23:44:38.057742 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:38.059083 kubelet[2309]: E1028 23:44:38.058613 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:38.059315 kubelet[2309]: E1028 23:44:38.058716 2309 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:44:38.059315 kubelet[2309]: E1028 23:44:38.059295 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:38.059452 kubelet[2309]: E1028 23:44:38.059401 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:38.071225 kubelet[2309]: E1028 23:44:38.071191 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.174316 kubelet[2309]: E1028 23:44:38.174253 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.274680 kubelet[2309]: E1028 23:44:38.274571 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.374714 kubelet[2309]: E1028 23:44:38.374665 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 
23:44:38.475544 kubelet[2309]: E1028 23:44:38.475492 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.576133 kubelet[2309]: E1028 23:44:38.575881 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.676678 kubelet[2309]: E1028 23:44:38.676642 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.778005 kubelet[2309]: E1028 23:44:38.777964 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.879239 kubelet[2309]: E1028 23:44:38.879119 2309 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:38.907371 kubelet[2309]: I1028 23:44:38.907320 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:38.920794 kubelet[2309]: I1028 23:44:38.920490 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:38.925760 kubelet[2309]: I1028 23:44:38.925722 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:44:39.002811 kubelet[2309]: I1028 23:44:39.002725 2309 apiserver.go:52] "Watching apiserver" Oct 28 23:44:39.008642 kubelet[2309]: I1028 23:44:39.008588 2309 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 23:44:39.058990 kubelet[2309]: E1028 23:44:39.058954 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:39.058990 kubelet[2309]: E1028 23:44:39.058977 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:39.059367 kubelet[2309]: I1028 23:44:39.059035 2309 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:39.064151 kubelet[2309]: E1028 23:44:39.064114 2309 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:39.064346 kubelet[2309]: E1028 23:44:39.064319 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:40.060419 kubelet[2309]: E1028 23:44:40.060379 2309 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:40.200810 systemd[1]: Reload requested from client PID 2599 ('systemctl') (unit session-7.scope)... Oct 28 23:44:40.200828 systemd[1]: Reloading... Oct 28 23:44:40.266471 zram_generator::config[2642]: No configuration found. Oct 28 23:44:40.437373 systemd[1]: Reloading finished in 236 ms. Oct 28 23:44:40.471544 kubelet[2309]: I1028 23:44:40.471506 2309 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:44:40.471719 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:40.483810 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 23:44:40.484116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:44:40.484182 systemd[1]: kubelet.service: Consumed 1.481s CPU time, 124M memory peak. Oct 28 23:44:40.485978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:44:40.630091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 28 23:44:40.633868 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 23:44:40.667515 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 23:44:40.667515 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:44:40.667826 kubelet[2684]: I1028 23:44:40.667558 2684 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 23:44:40.675246 kubelet[2684]: I1028 23:44:40.675189 2684 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 28 23:44:40.675246 kubelet[2684]: I1028 23:44:40.675214 2684 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 23:44:40.675246 kubelet[2684]: I1028 23:44:40.675248 2684 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 28 23:44:40.675246 kubelet[2684]: I1028 23:44:40.675255 2684 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 28 23:44:40.675690 kubelet[2684]: I1028 23:44:40.675434 2684 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 23:44:40.676816 kubelet[2684]: I1028 23:44:40.676790 2684 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 23:44:40.680374 kubelet[2684]: I1028 23:44:40.680067 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:44:40.683950 kubelet[2684]: I1028 23:44:40.683842 2684 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 23:44:40.686628 kubelet[2684]: I1028 23:44:40.686578 2684 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 28 23:44:40.686907 kubelet[2684]: I1028 23:44:40.686861 2684 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 23:44:40.688579 kubelet[2684]: I1028 23:44:40.686889 2684 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 23:44:40.688579 kubelet[2684]: I1028 23:44:40.687320 2684 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 23:44:40.688579 kubelet[2684]: I1028 23:44:40.687333 2684 container_manager_linux.go:306] "Creating device plugin manager" Oct 28 23:44:40.688579 kubelet[2684]: I1028 23:44:40.687368 2684 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 28 23:44:40.688579 kubelet[2684]: I1028 23:44:40.688253 2684 state_mem.go:36] 
"Initialized new in-memory state store" Oct 28 23:44:40.688763 kubelet[2684]: I1028 23:44:40.688412 2684 kubelet.go:475] "Attempting to sync node with API server" Oct 28 23:44:40.688763 kubelet[2684]: I1028 23:44:40.688428 2684 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 23:44:40.688763 kubelet[2684]: I1028 23:44:40.688470 2684 kubelet.go:387] "Adding apiserver pod source" Oct 28 23:44:40.688763 kubelet[2684]: I1028 23:44:40.688483 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 23:44:40.691729 kubelet[2684]: I1028 23:44:40.691696 2684 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 23:44:40.692266 kubelet[2684]: I1028 23:44:40.692244 2684 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 23:44:40.692318 kubelet[2684]: I1028 23:44:40.692278 2684 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 28 23:44:40.694857 kubelet[2684]: I1028 23:44:40.694777 2684 server.go:1262] "Started kubelet" Oct 28 23:44:40.695148 kubelet[2684]: I1028 23:44:40.695113 2684 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 23:44:40.696428 kubelet[2684]: I1028 23:44:40.696371 2684 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 23:44:40.696502 kubelet[2684]: I1028 23:44:40.696454 2684 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 28 23:44:40.699459 kubelet[2684]: I1028 23:44:40.696653 2684 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 23:44:40.699459 kubelet[2684]: I1028 23:44:40.696946 2684 server.go:310] "Adding debug handlers to kubelet 
server" Oct 28 23:44:40.699459 kubelet[2684]: I1028 23:44:40.698220 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 23:44:40.700900 kubelet[2684]: I1028 23:44:40.700863 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 23:44:40.702330 kubelet[2684]: I1028 23:44:40.702285 2684 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 28 23:44:40.702406 kubelet[2684]: I1028 23:44:40.702377 2684 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 28 23:44:40.702536 kubelet[2684]: I1028 23:44:40.702518 2684 reconciler.go:29] "Reconciler: start to sync state" Oct 28 23:44:40.708609 kubelet[2684]: I1028 23:44:40.708576 2684 factory.go:223] Registration of the containerd container factory successfully Oct 28 23:44:40.708609 kubelet[2684]: I1028 23:44:40.708598 2684 factory.go:223] Registration of the systemd container factory successfully Oct 28 23:44:40.708774 kubelet[2684]: I1028 23:44:40.708676 2684 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 23:44:40.709917 kubelet[2684]: E1028 23:44:40.709878 2684 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:44:40.710029 kubelet[2684]: E1028 23:44:40.710002 2684 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 23:44:40.720314 kubelet[2684]: I1028 23:44:40.720164 2684 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 28 23:44:40.721335 kubelet[2684]: I1028 23:44:40.721314 2684 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 28 23:44:40.721432 kubelet[2684]: I1028 23:44:40.721421 2684 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 28 23:44:40.721516 kubelet[2684]: I1028 23:44:40.721506 2684 kubelet.go:2427] "Starting kubelet main sync loop" Oct 28 23:44:40.721618 kubelet[2684]: E1028 23:44:40.721600 2684 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 23:44:40.741297 kubelet[2684]: I1028 23:44:40.741265 2684 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 23:44:40.741297 kubelet[2684]: I1028 23:44:40.741303 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 23:44:40.741429 kubelet[2684]: I1028 23:44:40.741325 2684 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:44:40.741475 kubelet[2684]: I1028 23:44:40.741467 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 23:44:40.741552 kubelet[2684]: I1028 23:44:40.741478 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 23:44:40.741552 kubelet[2684]: I1028 23:44:40.741493 2684 policy_none.go:49] "None policy: Start" Oct 28 23:44:40.741552 kubelet[2684]: I1028 23:44:40.741501 2684 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 28 23:44:40.741552 kubelet[2684]: I1028 23:44:40.741509 2684 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 28 23:44:40.741669 kubelet[2684]: I1028 23:44:40.741601 2684 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 28 23:44:40.741669 kubelet[2684]: I1028 23:44:40.741609 2684 policy_none.go:47] "Start" Oct 28 23:44:40.746915 kubelet[2684]: E1028 23:44:40.746866 2684 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:44:40.747084 kubelet[2684]: I1028 23:44:40.747066 
2684 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:44:40.747111 kubelet[2684]: I1028 23:44:40.747084 2684 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:44:40.747752 kubelet[2684]: I1028 23:44:40.747716 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:44:40.748696 kubelet[2684]: E1028 23:44:40.748628 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 28 23:44:40.822607 kubelet[2684]: I1028 23:44:40.822533 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:40.822718 kubelet[2684]: I1028 23:44:40.822674 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.822718 kubelet[2684]: I1028 23:44:40.822534 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:44:40.829117 kubelet[2684]: E1028 23:44:40.828948 2684 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.829117 kubelet[2684]: E1028 23:44:40.829048 2684 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:40.829117 kubelet[2684]: E1028 23:44:40.829055 2684 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 23:44:40.851147 kubelet[2684]: I1028 23:44:40.851107 2684 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:44:40.857243 kubelet[2684]: I1028 23:44:40.857215 2684 kubelet_node_status.go:124] "Node was previously 
registered" node="localhost" Oct 28 23:44:40.857350 kubelet[2684]: I1028 23:44:40.857336 2684 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:44:40.902801 kubelet[2684]: I1028 23:44:40.902749 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.902801 kubelet[2684]: I1028 23:44:40.902791 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.902801 kubelet[2684]: I1028 23:44:40.902810 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.902976 kubelet[2684]: I1028 23:44:40.902827 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:44:40.902976 kubelet[2684]: I1028 23:44:40.902844 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:40.902976 kubelet[2684]: I1028 23:44:40.902880 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:40.902976 kubelet[2684]: I1028 23:44:40.902920 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:40.902976 kubelet[2684]: I1028 23:44:40.902938 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fef681d7191ebf43d68af1cf01400c42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fef681d7191ebf43d68af1cf01400c42\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:40.903085 kubelet[2684]: I1028 23:44:40.902954 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:44:41.130107 kubelet[2684]: E1028 23:44:41.129974 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.130107 kubelet[2684]: E1028 23:44:41.130026 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.130240 kubelet[2684]: E1028 23:44:41.130180 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.691265 kubelet[2684]: I1028 23:44:41.691152 2684 apiserver.go:52] "Watching apiserver" Oct 28 23:44:41.703454 kubelet[2684]: I1028 23:44:41.702560 2684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 28 23:44:41.731712 kubelet[2684]: I1028 23:44:41.731610 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.731591577 podStartE2EDuration="3.731591577s" podCreationTimestamp="2025-10-28 23:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:44:41.722531008 +0000 UTC m=+1.085212802" watchObservedRunningTime="2025-10-28 23:44:41.731591577 +0000 UTC m=+1.094273371" Oct 28 23:44:41.731890 kubelet[2684]: I1028 23:44:41.731783 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.73177793 podStartE2EDuration="3.73177793s" podCreationTimestamp="2025-10-28 23:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:44:41.731564738 +0000 UTC m=+1.094246532" watchObservedRunningTime="2025-10-28 23:44:41.73177793 +0000 UTC m=+1.094459724" Oct 28 23:44:41.732670 kubelet[2684]: I1028 23:44:41.732639 2684 kubelet.go:3219] "Creating a mirror pod for static 
pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:41.732861 kubelet[2684]: E1028 23:44:41.732735 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.734845 kubelet[2684]: E1028 23:44:41.734800 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.738823 kubelet[2684]: E1028 23:44:41.738777 2684 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 23:44:41.740464 kubelet[2684]: E1028 23:44:41.739008 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:41.752283 kubelet[2684]: I1028 23:44:41.752211 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.752192589 podStartE2EDuration="3.752192589s" podCreationTimestamp="2025-10-28 23:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:44:41.741281244 +0000 UTC m=+1.103963038" watchObservedRunningTime="2025-10-28 23:44:41.752192589 +0000 UTC m=+1.114874383" Oct 28 23:44:42.734192 kubelet[2684]: E1028 23:44:42.733818 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:42.734192 kubelet[2684]: E1028 23:44:42.733914 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 28 23:44:43.735589 kubelet[2684]: E1028 23:44:43.735554 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:45.835479 kubelet[2684]: E1028 23:44:45.835326 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:46.161257 kubelet[2684]: E1028 23:44:46.161089 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:46.740286 kubelet[2684]: E1028 23:44:46.740256 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:47.623267 kubelet[2684]: I1028 23:44:47.623241 2684 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 23:44:47.623624 containerd[1531]: time="2025-10-28T23:44:47.623587645Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 28 23:44:47.623879 kubelet[2684]: I1028 23:44:47.623773 2684 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 23:44:48.812288 systemd[1]: Created slice kubepods-besteffort-podcb2234f1_3f54_476a_b2a8_a738989d386c.slice - libcontainer container kubepods-besteffort-podcb2234f1_3f54_476a_b2a8_a738989d386c.slice. 
Oct 28 23:44:48.852782 kubelet[2684]: I1028 23:44:48.852619 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb2234f1-3f54-476a-b2a8-a738989d386c-lib-modules\") pod \"kube-proxy-qt54t\" (UID: \"cb2234f1-3f54-476a-b2a8-a738989d386c\") " pod="kube-system/kube-proxy-qt54t" Oct 28 23:44:48.852782 kubelet[2684]: I1028 23:44:48.852661 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb2234f1-3f54-476a-b2a8-a738989d386c-kube-proxy\") pod \"kube-proxy-qt54t\" (UID: \"cb2234f1-3f54-476a-b2a8-a738989d386c\") " pod="kube-system/kube-proxy-qt54t" Oct 28 23:44:48.852782 kubelet[2684]: I1028 23:44:48.852677 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb2234f1-3f54-476a-b2a8-a738989d386c-xtables-lock\") pod \"kube-proxy-qt54t\" (UID: \"cb2234f1-3f54-476a-b2a8-a738989d386c\") " pod="kube-system/kube-proxy-qt54t" Oct 28 23:44:48.852782 kubelet[2684]: I1028 23:44:48.852691 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvtkm\" (UniqueName: \"kubernetes.io/projected/cb2234f1-3f54-476a-b2a8-a738989d386c-kube-api-access-qvtkm\") pod \"kube-proxy-qt54t\" (UID: \"cb2234f1-3f54-476a-b2a8-a738989d386c\") " pod="kube-system/kube-proxy-qt54t" Oct 28 23:44:48.879140 systemd[1]: Created slice kubepods-besteffort-podda5a71b7_4fe8_4136_8e73_54c94c9453ba.slice - libcontainer container kubepods-besteffort-podda5a71b7_4fe8_4136_8e73_54c94c9453ba.slice. 
Oct 28 23:44:48.953210 kubelet[2684]: I1028 23:44:48.953160 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvjk\" (UniqueName: \"kubernetes.io/projected/da5a71b7-4fe8-4136-8e73-54c94c9453ba-kube-api-access-7xvjk\") pod \"tigera-operator-65cdcdfd6d-kllmj\" (UID: \"da5a71b7-4fe8-4136-8e73-54c94c9453ba\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-kllmj" Oct 28 23:44:48.953557 kubelet[2684]: I1028 23:44:48.953523 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da5a71b7-4fe8-4136-8e73-54c94c9453ba-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-kllmj\" (UID: \"da5a71b7-4fe8-4136-8e73-54c94c9453ba\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-kllmj" Oct 28 23:44:49.127391 kubelet[2684]: E1028 23:44:49.127080 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:49.128360 containerd[1531]: time="2025-10-28T23:44:49.128279823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt54t,Uid:cb2234f1-3f54-476a-b2a8-a738989d386c,Namespace:kube-system,Attempt:0,}" Oct 28 23:44:49.144012 containerd[1531]: time="2025-10-28T23:44:49.143965638Z" level=info msg="connecting to shim 8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294" address="unix:///run/containerd/s/f9fa6d1e0786c1cce0816330af43dc3e198865a73112e352d8b36282be672628" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:44:49.170580 systemd[1]: Started cri-containerd-8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294.scope - libcontainer container 8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294. 
Oct 28 23:44:49.184361 containerd[1531]: time="2025-10-28T23:44:49.184324149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-kllmj,Uid:da5a71b7-4fe8-4136-8e73-54c94c9453ba,Namespace:tigera-operator,Attempt:0,}" Oct 28 23:44:49.191808 containerd[1531]: time="2025-10-28T23:44:49.191698746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qt54t,Uid:cb2234f1-3f54-476a-b2a8-a738989d386c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294\"" Oct 28 23:44:49.192770 kubelet[2684]: E1028 23:44:49.192745 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:44:49.199568 containerd[1531]: time="2025-10-28T23:44:49.199533134Z" level=info msg="CreateContainer within sandbox \"8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 23:44:49.204420 containerd[1531]: time="2025-10-28T23:44:49.203914837Z" level=info msg="connecting to shim d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7" address="unix:///run/containerd/s/2c1c27aed43138e0101ac33d83793c07cc4f8e0611111ee546c98f7589285fa4" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:44:49.210083 containerd[1531]: time="2025-10-28T23:44:49.210040862Z" level=info msg="Container fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:44:49.232643 systemd[1]: Started cri-containerd-d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7.scope - libcontainer container d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7. 
Oct 28 23:44:49.355945 containerd[1531]: time="2025-10-28T23:44:49.355865971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-kllmj,Uid:da5a71b7-4fe8-4136-8e73-54c94c9453ba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7\"" Oct 28 23:44:49.357509 containerd[1531]: time="2025-10-28T23:44:49.357484455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 23:44:49.358860 containerd[1531]: time="2025-10-28T23:44:49.357574413Z" level=info msg="CreateContainer within sandbox \"8e464a2ddb61e15db87438e3e0fc6eb860cfef95984dd56ae7ce1e7316fa6294\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e\"" Oct 28 23:44:49.359211 containerd[1531]: time="2025-10-28T23:44:49.359172258Z" level=info msg="StartContainer for \"fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e\"" Oct 28 23:44:49.360544 containerd[1531]: time="2025-10-28T23:44:49.360518788Z" level=info msg="connecting to shim fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e" address="unix:///run/containerd/s/f9fa6d1e0786c1cce0816330af43dc3e198865a73112e352d8b36282be672628" protocol=ttrpc version=3 Oct 28 23:44:49.390610 systemd[1]: Started cri-containerd-fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e.scope - libcontainer container fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e. 
Oct 28 23:44:49.423338 containerd[1531]: time="2025-10-28T23:44:49.423303085Z" level=info msg="StartContainer for \"fb1a922828d3630485bc05e24b37721e41f06ef51e5b720d81ce7b5c59977c1e\" returns successfully"
Oct 28 23:44:49.749576 kubelet[2684]: E1028 23:44:49.748420 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:44:50.484669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88766231.mount: Deactivated successfully.
Oct 28 23:44:50.924271 kubelet[2684]: E1028 23:44:50.924078 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:44:50.943545 kubelet[2684]: I1028 23:44:50.943428 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qt54t" podStartSLOduration=2.9434054769999998 podStartE2EDuration="2.943405477s" podCreationTimestamp="2025-10-28 23:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:44:49.760905889 +0000 UTC m=+9.123587683" watchObservedRunningTime="2025-10-28 23:44:50.943405477 +0000 UTC m=+10.306087231"
Oct 28 23:44:51.598186 containerd[1531]: time="2025-10-28T23:44:51.598124119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:44:51.598729 containerd[1531]: time="2025-10-28T23:44:51.598690508Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Oct 28 23:44:51.599493 containerd[1531]: time="2025-10-28T23:44:51.599461173Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:44:51.601817 containerd[1531]: time="2025-10-28T23:44:51.601634850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 28 23:44:51.603391 containerd[1531]: time="2025-10-28T23:44:51.603349416Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.245698805s"
Oct 28 23:44:51.603391 containerd[1531]: time="2025-10-28T23:44:51.603388055Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Oct 28 23:44:51.608405 containerd[1531]: time="2025-10-28T23:44:51.608369076Z" level=info msg="CreateContainer within sandbox \"d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 28 23:44:51.616214 containerd[1531]: time="2025-10-28T23:44:51.615633692Z" level=info msg="Container 528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46: CDI devices from CRI Config.CDIDevices: []"
Oct 28 23:44:51.621069 containerd[1531]: time="2025-10-28T23:44:51.621010226Z" level=info msg="CreateContainer within sandbox \"d05b9547c456eb1ab2ddc47e90766f5be874c1077b972673f7107b24434523d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46\""
Oct 28 23:44:51.621585 containerd[1531]: time="2025-10-28T23:44:51.621551855Z" level=info msg="StartContainer for \"528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46\""
Oct 28 23:44:51.622483 containerd[1531]: time="2025-10-28T23:44:51.622431437Z" level=info msg="connecting to shim 528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46" address="unix:///run/containerd/s/2c1c27aed43138e0101ac33d83793c07cc4f8e0611111ee546c98f7589285fa4" protocol=ttrpc version=3
Oct 28 23:44:51.647613 systemd[1]: Started cri-containerd-528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46.scope - libcontainer container 528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46.
Oct 28 23:44:51.672434 containerd[1531]: time="2025-10-28T23:44:51.672397047Z" level=info msg="StartContainer for \"528a8eb71b0b68d5f64e354ec767d9c6d812c1e9949edadb5c0c9329b852ba46\" returns successfully"
Oct 28 23:44:51.753236 kubelet[2684]: E1028 23:44:51.753206 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:44:51.771243 kubelet[2684]: I1028 23:44:51.771186 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-kllmj" podStartSLOduration=1.5243373089999999 podStartE2EDuration="3.771170569s" podCreationTimestamp="2025-10-28 23:44:48 +0000 UTC" firstStartedPulling="2025-10-28 23:44:49.357228781 +0000 UTC m=+8.719910575" lastFinishedPulling="2025-10-28 23:44:51.604062081 +0000 UTC m=+10.966743835" observedRunningTime="2025-10-28 23:44:51.770907974 +0000 UTC m=+11.133589768" watchObservedRunningTime="2025-10-28 23:44:51.771170569 +0000 UTC m=+11.133852323"
Oct 28 23:44:53.501209 update_engine[1517]: I20251028 23:44:53.501127 1517 update_attempter.cc:509] Updating boot flags...
Oct 28 23:44:55.859451 kubelet[2684]: E1028 23:44:55.856452 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:44:56.764530 kubelet[2684]: E1028 23:44:56.764495 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:44:56.984487 sudo[1741]: pam_unix(sudo:session): session closed for user root
Oct 28 23:44:56.987646 sshd[1740]: Connection closed by 10.0.0.1 port 37752
Oct 28 23:44:56.988473 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Oct 28 23:44:56.996142 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:37752.service: Deactivated successfully.
Oct 28 23:44:57.000930 systemd[1]: session-7.scope: Deactivated successfully.
Oct 28 23:44:57.002869 systemd[1]: session-7.scope: Consumed 6.803s CPU time, 220.8M memory peak.
Oct 28 23:44:57.006020 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit.
Oct 28 23:44:57.007096 systemd-logind[1510]: Removed session 7.
Oct 28 23:45:04.783508 systemd[1]: Created slice kubepods-besteffort-pod4df54b1a_0ee5_47ef_b377_6a5f2f50bec2.slice - libcontainer container kubepods-besteffort-pod4df54b1a_0ee5_47ef_b377_6a5f2f50bec2.slice.
Oct 28 23:45:04.862541 kubelet[2684]: I1028 23:45:04.862502 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4df54b1a-0ee5-47ef-b377-6a5f2f50bec2-tigera-ca-bundle\") pod \"calico-typha-cc8f56cf5-dd655\" (UID: \"4df54b1a-0ee5-47ef-b377-6a5f2f50bec2\") " pod="calico-system/calico-typha-cc8f56cf5-dd655"
Oct 28 23:45:04.862541 kubelet[2684]: I1028 23:45:04.862535 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4df54b1a-0ee5-47ef-b377-6a5f2f50bec2-typha-certs\") pod \"calico-typha-cc8f56cf5-dd655\" (UID: \"4df54b1a-0ee5-47ef-b377-6a5f2f50bec2\") " pod="calico-system/calico-typha-cc8f56cf5-dd655"
Oct 28 23:45:04.862906 kubelet[2684]: I1028 23:45:04.862596 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdkn\" (UniqueName: \"kubernetes.io/projected/4df54b1a-0ee5-47ef-b377-6a5f2f50bec2-kube-api-access-gxdkn\") pod \"calico-typha-cc8f56cf5-dd655\" (UID: \"4df54b1a-0ee5-47ef-b377-6a5f2f50bec2\") " pod="calico-system/calico-typha-cc8f56cf5-dd655"
Oct 28 23:45:04.995206 systemd[1]: Created slice kubepods-besteffort-pod4924b6d3_9c71_41fb_9831_c6db97fc0c98.slice - libcontainer container kubepods-besteffort-pod4924b6d3_9c71_41fb_9831_c6db97fc0c98.slice.
Oct 28 23:45:05.064530 kubelet[2684]: I1028 23:45:05.064215 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-var-lib-calico\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064530 kubelet[2684]: I1028 23:45:05.064258 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-cni-net-dir\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064530 kubelet[2684]: I1028 23:45:05.064273 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4924b6d3-9c71-41fb-9831-c6db97fc0c98-node-certs\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064530 kubelet[2684]: I1028 23:45:05.064287 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-cni-log-dir\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064530 kubelet[2684]: I1028 23:45:05.064304 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-flexvol-driver-host\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064716 kubelet[2684]: I1028 23:45:05.064328 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-cni-bin-dir\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064716 kubelet[2684]: I1028 23:45:05.064341 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-var-run-calico\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064716 kubelet[2684]: I1028 23:45:05.064355 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hp2h\" (UniqueName: \"kubernetes.io/projected/4924b6d3-9c71-41fb-9831-c6db97fc0c98-kube-api-access-8hp2h\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064716 kubelet[2684]: I1028 23:45:05.064373 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-xtables-lock\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064716 kubelet[2684]: I1028 23:45:05.064387 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-lib-modules\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064813 kubelet[2684]: I1028 23:45:05.064402 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4924b6d3-9c71-41fb-9831-c6db97fc0c98-tigera-ca-bundle\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.064813 kubelet[2684]: I1028 23:45:05.064417 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4924b6d3-9c71-41fb-9831-c6db97fc0c98-policysync\") pod \"calico-node-zmxnq\" (UID: \"4924b6d3-9c71-41fb-9831-c6db97fc0c98\") " pod="calico-system/calico-node-zmxnq"
Oct 28 23:45:05.089271 kubelet[2684]: E1028 23:45:05.089221 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:45:05.089745 containerd[1531]: time="2025-10-28T23:45:05.089711018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc8f56cf5-dd655,Uid:4df54b1a-0ee5-47ef-b377-6a5f2f50bec2,Namespace:calico-system,Attempt:0,}"
Oct 28 23:45:05.125866 containerd[1531]: time="2025-10-28T23:45:05.125820606Z" level=info msg="connecting to shim 8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03" address="unix:///run/containerd/s/aa8d2e153464d3bc1b4c645337308f8629be58d12ae5e67e2a9088fb56c5a72b" namespace=k8s.io protocol=ttrpc version=3
Oct 28 23:45:05.163304 kubelet[2684]: E1028 23:45:05.162317 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7"
Oct 28 23:45:05.184557 kubelet[2684]: E1028 23:45:05.184514 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.184557 kubelet[2684]: W1028 23:45:05.184549 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.184697 kubelet[2684]: E1028 23:45:05.184571 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.185619 kubelet[2684]: E1028 23:45:05.185588 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.185619 kubelet[2684]: W1028 23:45:05.185608 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.185619 kubelet[2684]: E1028 23:45:05.185621 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.210661 systemd[1]: Started cri-containerd-8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03.scope - libcontainer container 8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03.
Oct 28 23:45:05.246892 kubelet[2684]: E1028 23:45:05.246827 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.246892 kubelet[2684]: W1028 23:45:05.246849 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.246892 kubelet[2684]: E1028 23:45:05.246866 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.247383 kubelet[2684]: E1028 23:45:05.247062 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.247383 kubelet[2684]: W1028 23:45:05.247070 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.247383 kubelet[2684]: E1028 23:45:05.247107 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.247482 kubelet[2684]: E1028 23:45:05.247392 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.247482 kubelet[2684]: W1028 23:45:05.247402 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.247482 kubelet[2684]: E1028 23:45:05.247412 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.247903 kubelet[2684]: E1028 23:45:05.247883 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.247903 kubelet[2684]: W1028 23:45:05.247903 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.247978 kubelet[2684]: E1028 23:45:05.247918 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.248089 kubelet[2684]: E1028 23:45:05.248076 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.248089 kubelet[2684]: W1028 23:45:05.248088 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.248138 kubelet[2684]: E1028 23:45:05.248099 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.248362 kubelet[2684]: E1028 23:45:05.248341 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.248362 kubelet[2684]: W1028 23:45:05.248360 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.248474 kubelet[2684]: E1028 23:45:05.248370 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.248641 kubelet[2684]: E1028 23:45:05.248625 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.248641 kubelet[2684]: W1028 23:45:05.248639 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.248702 kubelet[2684]: E1028 23:45:05.248649 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.248847 kubelet[2684]: E1028 23:45:05.248834 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.248886 kubelet[2684]: W1028 23:45:05.248848 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.248886 kubelet[2684]: E1028 23:45:05.248858 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.249180 kubelet[2684]: E1028 23:45:05.249163 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.249216 kubelet[2684]: W1028 23:45:05.249180 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.249216 kubelet[2684]: E1028 23:45:05.249193 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.249486 kubelet[2684]: E1028 23:45:05.249469 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.249486 kubelet[2684]: W1028 23:45:05.249485 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.249559 kubelet[2684]: E1028 23:45:05.249496 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.249853 kubelet[2684]: E1028 23:45:05.249832 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.249853 kubelet[2684]: W1028 23:45:05.249852 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.249917 kubelet[2684]: E1028 23:45:05.249861 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.250063 kubelet[2684]: E1028 23:45:05.250047 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.250264 containerd[1531]: time="2025-10-28T23:45:05.250229645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cc8f56cf5-dd655,Uid:4df54b1a-0ee5-47ef-b377-6a5f2f50bec2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03\""
Oct 28 23:45:05.250406 kubelet[2684]: W1028 23:45:05.250064 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.250454 kubelet[2684]: E1028 23:45:05.250412 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.250679 kubelet[2684]: E1028 23:45:05.250665 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.250679 kubelet[2684]: W1028 23:45:05.250677 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.250778 kubelet[2684]: E1028 23:45:05.250687 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.250937 kubelet[2684]: E1028 23:45:05.250892 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 28 23:45:05.251027 kubelet[2684]: E1028 23:45:05.251010 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.251027 kubelet[2684]: W1028 23:45:05.251025 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.251075 kubelet[2684]: E1028 23:45:05.251035 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.251213 kubelet[2684]: E1028 23:45:05.251201 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.251238 kubelet[2684]: W1028 23:45:05.251213 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.251238 kubelet[2684]: E1028 23:45:05.251221 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.251550 kubelet[2684]: E1028 23:45:05.251530 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.251550 kubelet[2684]: W1028 23:45:05.251546 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.251644 kubelet[2684]: E1028 23:45:05.251556 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.252413 kubelet[2684]: E1028 23:45:05.252368 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.252413 kubelet[2684]: W1028 23:45:05.252384 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.252413 kubelet[2684]: E1028 23:45:05.252395 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.252825 containerd[1531]: time="2025-10-28T23:45:05.252794818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 28 23:45:05.252906 kubelet[2684]: E1028 23:45:05.252890 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.252906 kubelet[2684]: W1028 23:45:05.252904 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.252958 kubelet[2684]: E1028 23:45:05.252915 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.253273 kubelet[2684]: E1028 23:45:05.253256 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.253273 kubelet[2684]: W1028 23:45:05.253272 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.253466 kubelet[2684]: E1028 23:45:05.253285 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.253607 kubelet[2684]: E1028 23:45:05.253587 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.253607 kubelet[2684]: W1028 23:45:05.253603 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.253651 kubelet[2684]: E1028 23:45:05.253614 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.266885 kubelet[2684]: E1028 23:45:05.266840 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.266885 kubelet[2684]: W1028 23:45:05.266872 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.266885 kubelet[2684]: E1028 23:45:05.266889 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.267056 kubelet[2684]: I1028 23:45:05.266917 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/48b595fd-60f3-4e0e-96da-2d837a2764a7-kubelet-dir\") pod \"csi-node-driver-h4shv\" (UID: \"48b595fd-60f3-4e0e-96da-2d837a2764a7\") " pod="calico-system/csi-node-driver-h4shv"
Oct 28 23:45:05.267080 kubelet[2684]: E1028 23:45:05.267072 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.267101 kubelet[2684]: W1028 23:45:05.267080 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.267101 kubelet[2684]: E1028 23:45:05.267088 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.267140 kubelet[2684]: I1028 23:45:05.267107 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/48b595fd-60f3-4e0e-96da-2d837a2764a7-varrun\") pod \"csi-node-driver-h4shv\" (UID: \"48b595fd-60f3-4e0e-96da-2d837a2764a7\") " pod="calico-system/csi-node-driver-h4shv"
Oct 28 23:45:05.267276 kubelet[2684]: E1028 23:45:05.267263 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.267276 kubelet[2684]: W1028 23:45:05.267275 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.267333 kubelet[2684]: E1028 23:45:05.267283 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.267333 kubelet[2684]: I1028 23:45:05.267300 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jf4m\" (UniqueName: \"kubernetes.io/projected/48b595fd-60f3-4e0e-96da-2d837a2764a7-kube-api-access-2jf4m\") pod \"csi-node-driver-h4shv\" (UID: \"48b595fd-60f3-4e0e-96da-2d837a2764a7\") " pod="calico-system/csi-node-driver-h4shv"
Oct 28 23:45:05.267553 kubelet[2684]: E1028 23:45:05.267534 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.267602 kubelet[2684]: W1028 23:45:05.267551 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.267602 kubelet[2684]: E1028 23:45:05.267574 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.267648 kubelet[2684]: I1028 23:45:05.267600 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/48b595fd-60f3-4e0e-96da-2d837a2764a7-socket-dir\") pod \"csi-node-driver-h4shv\" (UID: \"48b595fd-60f3-4e0e-96da-2d837a2764a7\") " pod="calico-system/csi-node-driver-h4shv"
Oct 28 23:45:05.267786 kubelet[2684]: E1028 23:45:05.267773 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.267786 kubelet[2684]: W1028 23:45:05.267785 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.267845 kubelet[2684]: E1028 23:45:05.267798 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.267845 kubelet[2684]: I1028 23:45:05.267817 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/48b595fd-60f3-4e0e-96da-2d837a2764a7-registration-dir\") pod \"csi-node-driver-h4shv\" (UID: \"48b595fd-60f3-4e0e-96da-2d837a2764a7\") " pod="calico-system/csi-node-driver-h4shv"
Oct 28 23:45:05.268078 kubelet[2684]: E1028 23:45:05.267985 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.268078 kubelet[2684]: W1028 23:45:05.268023 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.268078 kubelet[2684]: E1028 23:45:05.268033 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 28 23:45:05.268291 kubelet[2684]: E1028 23:45:05.268208 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 28 23:45:05.268291 kubelet[2684]: W1028 23:45:05.268219 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 28 23:45:05.268291 kubelet[2684]: E1028 23:45:05.268227 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.268589 kubelet[2684]: E1028 23:45:05.268394 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.268589 kubelet[2684]: W1028 23:45:05.268413 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.268589 kubelet[2684]: E1028 23:45:05.268421 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.268755 kubelet[2684]: E1028 23:45:05.268737 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.268755 kubelet[2684]: W1028 23:45:05.268750 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.268800 kubelet[2684]: E1028 23:45:05.268759 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.268945 kubelet[2684]: E1028 23:45:05.268902 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.268976 kubelet[2684]: W1028 23:45:05.268944 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.268976 kubelet[2684]: E1028 23:45:05.268954 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.269169 kubelet[2684]: E1028 23:45:05.269156 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.269205 kubelet[2684]: W1028 23:45:05.269172 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.269205 kubelet[2684]: E1028 23:45:05.269181 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.269324 kubelet[2684]: E1028 23:45:05.269311 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.269324 kubelet[2684]: W1028 23:45:05.269321 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.269392 kubelet[2684]: E1028 23:45:05.269337 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.269511 kubelet[2684]: E1028 23:45:05.269499 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.269511 kubelet[2684]: W1028 23:45:05.269510 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.269605 kubelet[2684]: E1028 23:45:05.269519 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.269717 kubelet[2684]: E1028 23:45:05.269702 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.269717 kubelet[2684]: W1028 23:45:05.269714 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.269779 kubelet[2684]: E1028 23:45:05.269723 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.269913 kubelet[2684]: E1028 23:45:05.269902 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.269979 kubelet[2684]: W1028 23:45:05.269914 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.270001 kubelet[2684]: E1028 23:45:05.269984 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.302552 kubelet[2684]: E1028 23:45:05.302514 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:05.303090 containerd[1531]: time="2025-10-28T23:45:05.303055981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmxnq,Uid:4924b6d3-9c71-41fb-9831-c6db97fc0c98,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:05.329255 containerd[1531]: time="2025-10-28T23:45:05.327027694Z" level=info msg="connecting to shim 9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771" address="unix:///run/containerd/s/8b1f4b8fbca52a6f48a5fafa6d8e56d82545349c6d66a48a1ac4465fcd5a2c8e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:05.356624 systemd[1]: Started cri-containerd-9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771.scope - libcontainer container 9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771. Oct 28 23:45:05.369166 kubelet[2684]: E1028 23:45:05.369108 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.369166 kubelet[2684]: W1028 23:45:05.369163 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.369368 kubelet[2684]: E1028 23:45:05.369183 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.369684 kubelet[2684]: E1028 23:45:05.369668 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.369684 kubelet[2684]: W1028 23:45:05.369683 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.369741 kubelet[2684]: E1028 23:45:05.369695 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.369962 kubelet[2684]: E1028 23:45:05.369939 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.369962 kubelet[2684]: W1028 23:45:05.369955 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.370032 kubelet[2684]: E1028 23:45:05.369968 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.370559 kubelet[2684]: E1028 23:45:05.370537 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.370559 kubelet[2684]: W1028 23:45:05.370556 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.370637 kubelet[2684]: E1028 23:45:05.370569 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.371324 kubelet[2684]: E1028 23:45:05.371308 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.371324 kubelet[2684]: W1028 23:45:05.371322 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.371502 kubelet[2684]: E1028 23:45:05.371332 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.371577 kubelet[2684]: E1028 23:45:05.371564 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.371577 kubelet[2684]: W1028 23:45:05.371577 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.371678 kubelet[2684]: E1028 23:45:05.371587 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.371746 kubelet[2684]: E1028 23:45:05.371735 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.371777 kubelet[2684]: W1028 23:45:05.371746 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.371777 kubelet[2684]: E1028 23:45:05.371755 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.372663 kubelet[2684]: E1028 23:45:05.372646 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.372663 kubelet[2684]: W1028 23:45:05.372662 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.372739 kubelet[2684]: E1028 23:45:05.372674 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.372941 kubelet[2684]: E1028 23:45:05.372884 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.372985 kubelet[2684]: W1028 23:45:05.372943 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.372985 kubelet[2684]: E1028 23:45:05.372955 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.373144 kubelet[2684]: E1028 23:45:05.373131 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.373144 kubelet[2684]: W1028 23:45:05.373142 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.373251 kubelet[2684]: E1028 23:45:05.373151 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.373597 kubelet[2684]: E1028 23:45:05.373574 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.373597 kubelet[2684]: W1028 23:45:05.373596 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.373747 kubelet[2684]: E1028 23:45:05.373608 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.373930 kubelet[2684]: E1028 23:45:05.373914 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.374154 kubelet[2684]: W1028 23:45:05.374131 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.374194 kubelet[2684]: E1028 23:45:05.374157 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.374576 kubelet[2684]: E1028 23:45:05.374435 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.374576 kubelet[2684]: W1028 23:45:05.374548 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.374576 kubelet[2684]: E1028 23:45:05.374560 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.375258 kubelet[2684]: E1028 23:45:05.375240 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.375258 kubelet[2684]: W1028 23:45:05.375255 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.375337 kubelet[2684]: E1028 23:45:05.375267 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.377703 kubelet[2684]: E1028 23:45:05.377685 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.377703 kubelet[2684]: W1028 23:45:05.377703 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.377816 kubelet[2684]: E1028 23:45:05.377716 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.377995 kubelet[2684]: E1028 23:45:05.377884 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.377995 kubelet[2684]: W1028 23:45:05.377896 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.377995 kubelet[2684]: E1028 23:45:05.377920 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.379114 kubelet[2684]: E1028 23:45:05.379095 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.379114 kubelet[2684]: W1028 23:45:05.379110 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.379321 kubelet[2684]: E1028 23:45:05.379296 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.379921 kubelet[2684]: E1028 23:45:05.379695 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.379921 kubelet[2684]: W1028 23:45:05.379706 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.379921 kubelet[2684]: E1028 23:45:05.379717 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.380131 kubelet[2684]: E1028 23:45:05.380078 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.380131 kubelet[2684]: W1028 23:45:05.380131 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.380184 kubelet[2684]: E1028 23:45:05.380143 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.380544 kubelet[2684]: E1028 23:45:05.380531 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.380544 kubelet[2684]: W1028 23:45:05.380544 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.380628 kubelet[2684]: E1028 23:45:05.380555 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.381016 containerd[1531]: time="2025-10-28T23:45:05.380965618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmxnq,Uid:4924b6d3-9c71-41fb-9831-c6db97fc0c98,Namespace:calico-system,Attempt:0,} returns sandbox id \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\"" Oct 28 23:45:05.381065 kubelet[2684]: E1028 23:45:05.380994 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.381065 kubelet[2684]: W1028 23:45:05.381005 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.381065 kubelet[2684]: E1028 23:45:05.381015 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.381403 kubelet[2684]: E1028 23:45:05.381389 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.381403 kubelet[2684]: W1028 23:45:05.381403 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.381495 kubelet[2684]: E1028 23:45:05.381413 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.381656 kubelet[2684]: E1028 23:45:05.381645 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.381683 kubelet[2684]: W1028 23:45:05.381656 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.381683 kubelet[2684]: E1028 23:45:05.381665 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.382329 kubelet[2684]: E1028 23:45:05.382259 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.382329 kubelet[2684]: W1028 23:45:05.382279 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.382329 kubelet[2684]: E1028 23:45:05.382272 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:05.382329 kubelet[2684]: E1028 23:45:05.382291 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:05.383855 kubelet[2684]: E1028 23:45:05.383817 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.383931 kubelet[2684]: W1028 23:45:05.383866 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.383931 kubelet[2684]: E1028 23:45:05.383882 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:05.393446 kubelet[2684]: E1028 23:45:05.393367 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:05.393446 kubelet[2684]: W1028 23:45:05.393386 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:05.393446 kubelet[2684]: E1028 23:45:05.393401 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:06.437731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851350957.mount: Deactivated successfully. Oct 28 23:45:06.726041 kubelet[2684]: E1028 23:45:06.725712 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:07.593381 containerd[1531]: time="2025-10-28T23:45:07.593280944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:07.593967 containerd[1531]: time="2025-10-28T23:45:07.593739299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 28 23:45:07.595483 containerd[1531]: time="2025-10-28T23:45:07.595436083Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:07.598099 containerd[1531]: time="2025-10-28T23:45:07.598060618Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:07.599148 containerd[1531]: time="2025-10-28T23:45:07.599121888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.34628731s" Oct 28 23:45:07.599196 containerd[1531]: time="2025-10-28T23:45:07.599156288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 28 23:45:07.600596 containerd[1531]: time="2025-10-28T23:45:07.600476715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 23:45:07.613359 containerd[1531]: time="2025-10-28T23:45:07.613318233Z" level=info msg="CreateContainer within sandbox \"8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 23:45:07.620192 containerd[1531]: time="2025-10-28T23:45:07.619606133Z" level=info msg="Container 3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:07.627187 containerd[1531]: time="2025-10-28T23:45:07.627134141Z" level=info msg="CreateContainer within sandbox \"8bd8355985b08fb7f39772b57aa5df2df540b3f69d0304427cefd8582a60fc03\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07\"" Oct 28 23:45:07.627664 containerd[1531]: time="2025-10-28T23:45:07.627638897Z" level=info msg="StartContainer for 
\"3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07\"" Oct 28 23:45:07.629472 containerd[1531]: time="2025-10-28T23:45:07.629014044Z" level=info msg="connecting to shim 3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07" address="unix:///run/containerd/s/aa8d2e153464d3bc1b4c645337308f8629be58d12ae5e67e2a9088fb56c5a72b" protocol=ttrpc version=3 Oct 28 23:45:07.653609 systemd[1]: Started cri-containerd-3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07.scope - libcontainer container 3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07. Oct 28 23:45:07.690436 containerd[1531]: time="2025-10-28T23:45:07.690364220Z" level=info msg="StartContainer for \"3f3f726193a30be752ac7c6f8c2d5ae2ac48577c8633a0238ffaf31bab609e07\" returns successfully" Oct 28 23:45:07.817522 kubelet[2684]: E1028 23:45:07.817478 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:07.846251 kubelet[2684]: I1028 23:45:07.845722 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cc8f56cf5-dd655" podStartSLOduration=1.4973180130000001 podStartE2EDuration="3.845578663s" podCreationTimestamp="2025-10-28 23:45:04 +0000 UTC" firstStartedPulling="2025-10-28 23:45:05.251991187 +0000 UTC m=+24.614672981" lastFinishedPulling="2025-10-28 23:45:07.600251837 +0000 UTC m=+26.962933631" observedRunningTime="2025-10-28 23:45:07.845580543 +0000 UTC m=+27.208262337" watchObservedRunningTime="2025-10-28 23:45:07.845578663 +0000 UTC m=+27.208260417" Oct 28 23:45:07.873234 kubelet[2684]: E1028 23:45:07.873181 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.873234 kubelet[2684]: W1028 23:45:07.873208 2684 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.873234 kubelet[2684]: E1028 23:45:07.873230 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.874021 kubelet[2684]: E1028 23:45:07.873580 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.874021 kubelet[2684]: W1028 23:45:07.873592 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.874852 kubelet[2684]: E1028 23:45:07.873639 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.875130 kubelet[2684]: E1028 23:45:07.875105 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.875130 kubelet[2684]: W1028 23:45:07.875123 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.875130 kubelet[2684]: E1028 23:45:07.875138 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.876280 kubelet[2684]: E1028 23:45:07.876252 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.876280 kubelet[2684]: W1028 23:45:07.876275 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.876473 kubelet[2684]: E1028 23:45:07.876290 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.876867 kubelet[2684]: E1028 23:45:07.876825 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.876867 kubelet[2684]: W1028 23:45:07.876843 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.876867 kubelet[2684]: E1028 23:45:07.876856 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.877060 kubelet[2684]: E1028 23:45:07.877035 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877060 kubelet[2684]: W1028 23:45:07.877049 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877060 kubelet[2684]: E1028 23:45:07.877058 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.877233 kubelet[2684]: E1028 23:45:07.877214 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877233 kubelet[2684]: W1028 23:45:07.877224 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877233 kubelet[2684]: E1028 23:45:07.877231 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.877370 kubelet[2684]: E1028 23:45:07.877353 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877370 kubelet[2684]: W1028 23:45:07.877364 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877370 kubelet[2684]: E1028 23:45:07.877371 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.877563 kubelet[2684]: E1028 23:45:07.877541 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877563 kubelet[2684]: W1028 23:45:07.877552 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877563 kubelet[2684]: E1028 23:45:07.877560 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.877695 kubelet[2684]: E1028 23:45:07.877679 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877695 kubelet[2684]: W1028 23:45:07.877688 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877695 kubelet[2684]: E1028 23:45:07.877696 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.877824 kubelet[2684]: E1028 23:45:07.877810 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.877824 kubelet[2684]: W1028 23:45:07.877819 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.877824 kubelet[2684]: E1028 23:45:07.877826 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.877938 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.878387 kubelet[2684]: W1028 23:45:07.877945 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.877952 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.878072 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.878387 kubelet[2684]: W1028 23:45:07.878079 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.878087 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.878214 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.878387 kubelet[2684]: W1028 23:45:07.878221 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.878227 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.878387 kubelet[2684]: E1028 23:45:07.878339 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.878935 kubelet[2684]: W1028 23:45:07.878346 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.878935 kubelet[2684]: E1028 23:45:07.878352 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.894931 kubelet[2684]: E1028 23:45:07.894888 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.894931 kubelet[2684]: W1028 23:45:07.894913 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.894931 kubelet[2684]: E1028 23:45:07.894932 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.895169 kubelet[2684]: E1028 23:45:07.895114 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.895169 kubelet[2684]: W1028 23:45:07.895122 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.895169 kubelet[2684]: E1028 23:45:07.895131 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.895391 kubelet[2684]: E1028 23:45:07.895371 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.895492 kubelet[2684]: W1028 23:45:07.895390 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.895492 kubelet[2684]: E1028 23:45:07.895413 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.895586 kubelet[2684]: E1028 23:45:07.895579 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.895610 kubelet[2684]: W1028 23:45:07.895587 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.895610 kubelet[2684]: E1028 23:45:07.895595 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.895741 kubelet[2684]: E1028 23:45:07.895730 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.895741 kubelet[2684]: W1028 23:45:07.895739 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.895804 kubelet[2684]: E1028 23:45:07.895753 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.895942 kubelet[2684]: E1028 23:45:07.895929 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.895942 kubelet[2684]: W1028 23:45:07.895940 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.896009 kubelet[2684]: E1028 23:45:07.895949 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.896221 kubelet[2684]: E1028 23:45:07.896206 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.896348 kubelet[2684]: W1028 23:45:07.896334 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.896417 kubelet[2684]: E1028 23:45:07.896394 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.896704 kubelet[2684]: E1028 23:45:07.896673 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.896704 kubelet[2684]: W1028 23:45:07.896688 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.896704 kubelet[2684]: E1028 23:45:07.896698 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.896846 kubelet[2684]: E1028 23:45:07.896833 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.896846 kubelet[2684]: W1028 23:45:07.896843 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.896898 kubelet[2684]: E1028 23:45:07.896850 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.896989 kubelet[2684]: E1028 23:45:07.896979 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.896989 kubelet[2684]: W1028 23:45:07.896988 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.897033 kubelet[2684]: E1028 23:45:07.896995 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.897143 kubelet[2684]: E1028 23:45:07.897133 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.897168 kubelet[2684]: W1028 23:45:07.897143 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.897168 kubelet[2684]: E1028 23:45:07.897150 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.897407 kubelet[2684]: E1028 23:45:07.897383 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.897453 kubelet[2684]: W1028 23:45:07.897405 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.897453 kubelet[2684]: E1028 23:45:07.897418 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.897649 kubelet[2684]: E1028 23:45:07.897634 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.897649 kubelet[2684]: W1028 23:45:07.897647 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.897712 kubelet[2684]: E1028 23:45:07.897656 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.897843 kubelet[2684]: E1028 23:45:07.897829 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.897843 kubelet[2684]: W1028 23:45:07.897840 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.897887 kubelet[2684]: E1028 23:45:07.897848 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.897989 kubelet[2684]: E1028 23:45:07.897978 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.898013 kubelet[2684]: W1028 23:45:07.897989 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.898013 kubelet[2684]: E1028 23:45:07.897997 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.898181 kubelet[2684]: E1028 23:45:07.898171 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.898202 kubelet[2684]: W1028 23:45:07.898181 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.898202 kubelet[2684]: E1028 23:45:07.898192 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:07.898511 kubelet[2684]: E1028 23:45:07.898496 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.898511 kubelet[2684]: W1028 23:45:07.898509 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.898574 kubelet[2684]: E1028 23:45:07.898519 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:45:07.907804 kubelet[2684]: E1028 23:45:07.907768 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:45:07.907804 kubelet[2684]: W1028 23:45:07.907792 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:45:07.907804 kubelet[2684]: E1028 23:45:07.907810 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:45:08.671228 containerd[1531]: time="2025-10-28T23:45:08.671146367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:08.672315 containerd[1531]: time="2025-10-28T23:45:08.672284117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 28 23:45:08.673144 containerd[1531]: time="2025-10-28T23:45:08.673105669Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:08.675666 containerd[1531]: time="2025-10-28T23:45:08.675633126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:08.676419 containerd[1531]: time="2025-10-28T23:45:08.676371200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.075845005s" Oct 28 23:45:08.676419 containerd[1531]: time="2025-10-28T23:45:08.676411959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 28 23:45:08.680432 containerd[1531]: time="2025-10-28T23:45:08.680401683Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 23:45:08.705361 containerd[1531]: time="2025-10-28T23:45:08.705308855Z" level=info msg="Container 046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:08.714406 containerd[1531]: time="2025-10-28T23:45:08.714343732Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\"" Oct 28 23:45:08.715088 containerd[1531]: time="2025-10-28T23:45:08.715036366Z" level=info msg="StartContainer for \"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\"" Oct 28 23:45:08.716622 containerd[1531]: time="2025-10-28T23:45:08.716574511Z" level=info msg="connecting to shim 046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2" address="unix:///run/containerd/s/8b1f4b8fbca52a6f48a5fafa6d8e56d82545349c6d66a48a1ac4465fcd5a2c8e" protocol=ttrpc version=3 Oct 28 23:45:08.722262 kubelet[2684]: E1028 23:45:08.722226 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:08.740634 systemd[1]: Started cri-containerd-046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2.scope - libcontainer container 046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2. Oct 28 23:45:08.795852 systemd[1]: cri-containerd-046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2.scope: Deactivated successfully. 
Oct 28 23:45:08.799310 containerd[1531]: time="2025-10-28T23:45:08.799176595Z" level=info msg="StartContainer for \"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\" returns successfully" Oct 28 23:45:08.822156 kubelet[2684]: I1028 23:45:08.821943 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:45:08.822811 kubelet[2684]: E1028 23:45:08.822491 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:08.822811 kubelet[2684]: E1028 23:45:08.822630 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:08.829050 containerd[1531]: time="2025-10-28T23:45:08.829008282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\" id:\"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\" pid:3385 exited_at:{seconds:1761695108 nanos:819619968}" Oct 28 23:45:08.830686 containerd[1531]: time="2025-10-28T23:45:08.830621427Z" level=info msg="received exit event container_id:\"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\" id:\"046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2\" pid:3385 exited_at:{seconds:1761695108 nanos:819619968}" Oct 28 23:45:08.893875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-046c6ac183a41ff372b3a1b2679e85cc15c2e7567eae0ed00a76fe1d294897a2-rootfs.mount: Deactivated successfully. 
Oct 28 23:45:09.829426 kubelet[2684]: E1028 23:45:09.829049 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:09.830628 containerd[1531]: time="2025-10-28T23:45:09.830592506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 23:45:09.974085 kubelet[2684]: I1028 23:45:09.973982 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:45:09.974455 kubelet[2684]: E1028 23:45:09.974414 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:10.722817 kubelet[2684]: E1028 23:45:10.722775 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:10.831034 kubelet[2684]: E1028 23:45:10.830940 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:12.439776 containerd[1531]: time="2025-10-28T23:45:12.439718990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:12.441001 containerd[1531]: time="2025-10-28T23:45:12.440968060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 28 23:45:12.443608 containerd[1531]: time="2025-10-28T23:45:12.443541480Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:12.446579 containerd[1531]: time="2025-10-28T23:45:12.446283258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:12.447010 containerd[1531]: time="2025-10-28T23:45:12.446982973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.616350907s" Oct 28 23:45:12.447090 containerd[1531]: time="2025-10-28T23:45:12.447075532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 28 23:45:12.451046 containerd[1531]: time="2025-10-28T23:45:12.451012661Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 23:45:12.458560 containerd[1531]: time="2025-10-28T23:45:12.458522201Z" level=info msg="Container a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:12.466900 containerd[1531]: time="2025-10-28T23:45:12.466860055Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\"" Oct 28 23:45:12.467484 containerd[1531]: time="2025-10-28T23:45:12.467431730Z" level=info msg="StartContainer for 
\"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\"" Oct 28 23:45:12.470463 containerd[1531]: time="2025-10-28T23:45:12.470296427Z" level=info msg="connecting to shim a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168" address="unix:///run/containerd/s/8b1f4b8fbca52a6f48a5fafa6d8e56d82545349c6d66a48a1ac4465fcd5a2c8e" protocol=ttrpc version=3 Oct 28 23:45:12.496633 systemd[1]: Started cri-containerd-a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168.scope - libcontainer container a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168. Oct 28 23:45:12.533758 containerd[1531]: time="2025-10-28T23:45:12.533720084Z" level=info msg="StartContainer for \"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\" returns successfully" Oct 28 23:45:12.723623 kubelet[2684]: E1028 23:45:12.722336 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:12.837880 kubelet[2684]: E1028 23:45:12.837791 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:13.150073 systemd[1]: cri-containerd-a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168.scope: Deactivated successfully. Oct 28 23:45:13.150356 systemd[1]: cri-containerd-a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168.scope: Consumed 482ms CPU time, 176.7M memory peak, 2.3M read from disk, 165.9M written to disk. 
Oct 28 23:45:13.151457 containerd[1531]: time="2025-10-28T23:45:13.151359177Z" level=info msg="received exit event container_id:\"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\" id:\"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\" pid:3449 exited_at:{seconds:1761695113 nanos:150946540}" Oct 28 23:45:13.151737 containerd[1531]: time="2025-10-28T23:45:13.151713814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\" id:\"a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168\" pid:3449 exited_at:{seconds:1761695113 nanos:150946540}" Oct 28 23:45:13.162910 kubelet[2684]: I1028 23:45:13.162876 2684 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 28 23:45:13.175372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a98469e4736e93a8fa9b2d0e5d820b2fa4dbce26958bf16455ab5ecac1391168-rootfs.mount: Deactivated successfully. Oct 28 23:45:13.218984 systemd[1]: Created slice kubepods-burstable-podc19cc983_98a3_458b_a49c_1ccea440545a.slice - libcontainer container kubepods-burstable-podc19cc983_98a3_458b_a49c_1ccea440545a.slice. 
Oct 28 23:45:13.234360 kubelet[2684]: I1028 23:45:13.234305 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c19cc983-98a3-458b-a49c-1ccea440545a-config-volume\") pod \"coredns-66bc5c9577-tsp9j\" (UID: \"c19cc983-98a3-458b-a49c-1ccea440545a\") " pod="kube-system/coredns-66bc5c9577-tsp9j" Oct 28 23:45:13.235134 kubelet[2684]: I1028 23:45:13.234404 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fldgw\" (UniqueName: \"kubernetes.io/projected/c19cc983-98a3-458b-a49c-1ccea440545a-kube-api-access-fldgw\") pod \"coredns-66bc5c9577-tsp9j\" (UID: \"c19cc983-98a3-458b-a49c-1ccea440545a\") " pod="kube-system/coredns-66bc5c9577-tsp9j" Oct 28 23:45:13.240098 systemd[1]: Created slice kubepods-burstable-pod3b25aa30_f019_4265_b023_79afab8fe52e.slice - libcontainer container kubepods-burstable-pod3b25aa30_f019_4265_b023_79afab8fe52e.slice. Oct 28 23:45:13.249146 systemd[1]: Created slice kubepods-besteffort-pod51dc260a_540b_4f02_a0f1_2e415e73ff2c.slice - libcontainer container kubepods-besteffort-pod51dc260a_540b_4f02_a0f1_2e415e73ff2c.slice. Oct 28 23:45:13.256008 systemd[1]: Created slice kubepods-besteffort-pod39576903_7c39_47bd_b6fa_990764734118.slice - libcontainer container kubepods-besteffort-pod39576903_7c39_47bd_b6fa_990764734118.slice. Oct 28 23:45:13.261608 systemd[1]: Created slice kubepods-besteffort-pod77168d35_e1f8_4112_9d0d_c414c5ff0981.slice - libcontainer container kubepods-besteffort-pod77168d35_e1f8_4112_9d0d_c414c5ff0981.slice. Oct 28 23:45:13.267861 systemd[1]: Created slice kubepods-besteffort-pod37c97580_71cc_4bc9_9010_0bb18fd1ed99.slice - libcontainer container kubepods-besteffort-pod37c97580_71cc_4bc9_9010_0bb18fd1ed99.slice. 
Oct 28 23:45:13.273085 systemd[1]: Created slice kubepods-besteffort-pod2d30b29a_2608_4dc4_a762_9ebd83a9d186.slice - libcontainer container kubepods-besteffort-pod2d30b29a_2608_4dc4_a762_9ebd83a9d186.slice. Oct 28 23:45:13.277499 systemd[1]: Created slice kubepods-besteffort-podc7a1e2dd_52c0_45e7_a13b_cfdddc111238.slice - libcontainer container kubepods-besteffort-podc7a1e2dd_52c0_45e7_a13b_cfdddc111238.slice. Oct 28 23:45:13.335001 kubelet[2684]: I1028 23:45:13.334948 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj55s\" (UniqueName: \"kubernetes.io/projected/37c97580-71cc-4bc9-9010-0bb18fd1ed99-kube-api-access-hj55s\") pod \"calico-kube-controllers-d89cc7458-8hgnf\" (UID: \"37c97580-71cc-4bc9-9010-0bb18fd1ed99\") " pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" Oct 28 23:45:13.335001 kubelet[2684]: I1028 23:45:13.334994 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7a1e2dd-52c0-45e7-a13b-cfdddc111238-config\") pod \"goldmane-7c778bb748-rc79g\" (UID: \"c7a1e2dd-52c0-45e7-a13b-cfdddc111238\") " pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.335001 kubelet[2684]: I1028 23:45:13.335012 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a1e2dd-52c0-45e7-a13b-cfdddc111238-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-rc79g\" (UID: \"c7a1e2dd-52c0-45e7-a13b-cfdddc111238\") " pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.335202 kubelet[2684]: I1028 23:45:13.335106 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a1e2dd-52c0-45e7-a13b-cfdddc111238-goldmane-key-pair\") pod \"goldmane-7c778bb748-rc79g\" (UID: 
\"c7a1e2dd-52c0-45e7-a13b-cfdddc111238\") " pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.335202 kubelet[2684]: I1028 23:45:13.335147 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp54s\" (UniqueName: \"kubernetes.io/projected/c7a1e2dd-52c0-45e7-a13b-cfdddc111238-kube-api-access-dp54s\") pod \"goldmane-7c778bb748-rc79g\" (UID: \"c7a1e2dd-52c0-45e7-a13b-cfdddc111238\") " pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.335202 kubelet[2684]: I1028 23:45:13.335187 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77168d35-e1f8-4112-9d0d-c414c5ff0981-calico-apiserver-certs\") pod \"calico-apiserver-6c596cb8fc-mxxk5\" (UID: \"77168d35-e1f8-4112-9d0d-c414c5ff0981\") " pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" Oct 28 23:45:13.335267 kubelet[2684]: I1028 23:45:13.335214 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39576903-7c39-47bd-b6fa-990764734118-whisker-backend-key-pair\") pod \"whisker-588fb4b9d6-tcmsn\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " pod="calico-system/whisker-588fb4b9d6-tcmsn" Oct 28 23:45:13.335267 kubelet[2684]: I1028 23:45:13.335232 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj2jl\" (UniqueName: \"kubernetes.io/projected/2d30b29a-2608-4dc4-a762-9ebd83a9d186-kube-api-access-mj2jl\") pod \"calico-apiserver-548d874589-gxsbt\" (UID: \"2d30b29a-2608-4dc4-a762-9ebd83a9d186\") " pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" Oct 28 23:45:13.335267 kubelet[2684]: I1028 23:45:13.335246 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkmg\" (UniqueName: 
\"kubernetes.io/projected/51dc260a-540b-4f02-a0f1-2e415e73ff2c-kube-api-access-8nkmg\") pod \"calico-apiserver-548d874589-hbmkd\" (UID: \"51dc260a-540b-4f02-a0f1-2e415e73ff2c\") " pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" Oct 28 23:45:13.335330 kubelet[2684]: I1028 23:45:13.335273 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b25aa30-f019-4265-b023-79afab8fe52e-config-volume\") pod \"coredns-66bc5c9577-k9xnc\" (UID: \"3b25aa30-f019-4265-b023-79afab8fe52e\") " pod="kube-system/coredns-66bc5c9577-k9xnc" Oct 28 23:45:13.335330 kubelet[2684]: I1028 23:45:13.335297 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvjb\" (UniqueName: \"kubernetes.io/projected/3b25aa30-f019-4265-b023-79afab8fe52e-kube-api-access-dvvjb\") pod \"coredns-66bc5c9577-k9xnc\" (UID: \"3b25aa30-f019-4265-b023-79afab8fe52e\") " pod="kube-system/coredns-66bc5c9577-k9xnc" Oct 28 23:45:13.335330 kubelet[2684]: I1028 23:45:13.335311 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51dc260a-540b-4f02-a0f1-2e415e73ff2c-calico-apiserver-certs\") pod \"calico-apiserver-548d874589-hbmkd\" (UID: \"51dc260a-540b-4f02-a0f1-2e415e73ff2c\") " pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" Oct 28 23:45:13.335330 kubelet[2684]: I1028 23:45:13.335326 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39576903-7c39-47bd-b6fa-990764734118-whisker-ca-bundle\") pod \"whisker-588fb4b9d6-tcmsn\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " pod="calico-system/whisker-588fb4b9d6-tcmsn" Oct 28 23:45:13.335454 kubelet[2684]: I1028 23:45:13.335353 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rbz\" (UniqueName: \"kubernetes.io/projected/39576903-7c39-47bd-b6fa-990764734118-kube-api-access-r7rbz\") pod \"whisker-588fb4b9d6-tcmsn\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " pod="calico-system/whisker-588fb4b9d6-tcmsn" Oct 28 23:45:13.335454 kubelet[2684]: I1028 23:45:13.335371 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37c97580-71cc-4bc9-9010-0bb18fd1ed99-tigera-ca-bundle\") pod \"calico-kube-controllers-d89cc7458-8hgnf\" (UID: \"37c97580-71cc-4bc9-9010-0bb18fd1ed99\") " pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" Oct 28 23:45:13.335454 kubelet[2684]: I1028 23:45:13.335391 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2d30b29a-2608-4dc4-a762-9ebd83a9d186-calico-apiserver-certs\") pod \"calico-apiserver-548d874589-gxsbt\" (UID: \"2d30b29a-2608-4dc4-a762-9ebd83a9d186\") " pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" Oct 28 23:45:13.335454 kubelet[2684]: I1028 23:45:13.335406 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7d4\" (UniqueName: \"kubernetes.io/projected/77168d35-e1f8-4112-9d0d-c414c5ff0981-kube-api-access-wt7d4\") pod \"calico-apiserver-6c596cb8fc-mxxk5\" (UID: \"77168d35-e1f8-4112-9d0d-c414c5ff0981\") " pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" Oct 28 23:45:13.530712 kubelet[2684]: E1028 23:45:13.530679 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:13.533074 containerd[1531]: time="2025-10-28T23:45:13.533030564Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-tsp9j,Uid:c19cc983-98a3-458b-a49c-1ccea440545a,Namespace:kube-system,Attempt:0,}" Oct 28 23:45:13.548870 kubelet[2684]: E1028 23:45:13.548807 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:13.549609 containerd[1531]: time="2025-10-28T23:45:13.549572197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k9xnc,Uid:3b25aa30-f019-4265-b023-79afab8fe52e,Namespace:kube-system,Attempt:0,}" Oct 28 23:45:13.556474 containerd[1531]: time="2025-10-28T23:45:13.555943748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-hbmkd,Uid:51dc260a-540b-4f02-a0f1-2e415e73ff2c,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:13.569713 containerd[1531]: time="2025-10-28T23:45:13.569670083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c596cb8fc-mxxk5,Uid:77168d35-e1f8-4112-9d0d-c414c5ff0981,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:13.569913 containerd[1531]: time="2025-10-28T23:45:13.569759882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588fb4b9d6-tcmsn,Uid:39576903-7c39-47bd-b6fa-990764734118,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:13.575750 containerd[1531]: time="2025-10-28T23:45:13.575614757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d89cc7458-8hgnf,Uid:37c97580-71cc-4bc9-9010-0bb18fd1ed99,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:13.579730 containerd[1531]: time="2025-10-28T23:45:13.579686206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-gxsbt,Uid:2d30b29a-2608-4dc4-a762-9ebd83a9d186,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:13.582987 containerd[1531]: time="2025-10-28T23:45:13.582949021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-rc79g,Uid:c7a1e2dd-52c0-45e7-a13b-cfdddc111238,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:13.696225 containerd[1531]: time="2025-10-28T23:45:13.696158431Z" level=error msg="Failed to destroy network for sandbox \"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.696693 containerd[1531]: time="2025-10-28T23:45:13.696627747Z" level=error msg="Failed to destroy network for sandbox \"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.696795 containerd[1531]: time="2025-10-28T23:45:13.696761226Z" level=error msg="Failed to destroy network for sandbox \"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.698289 containerd[1531]: time="2025-10-28T23:45:13.698245975Z" level=error msg="Failed to destroy network for sandbox \"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.698681 containerd[1531]: time="2025-10-28T23:45:13.698644412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-588fb4b9d6-tcmsn,Uid:39576903-7c39-47bd-b6fa-990764734118,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.699551 containerd[1531]: time="2025-10-28T23:45:13.699502125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c596cb8fc-mxxk5,Uid:77168d35-e1f8-4112-9d0d-c414c5ff0981,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.700882 kubelet[2684]: E1028 23:45:13.700275 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.700882 kubelet[2684]: E1028 23:45:13.700521 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" Oct 28 23:45:13.700882 kubelet[2684]: E1028 23:45:13.700556 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" Oct 28 23:45:13.701110 kubelet[2684]: E1028 23:45:13.700638 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c596cb8fc-mxxk5_calico-apiserver(77168d35-e1f8-4112-9d0d-c414c5ff0981)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c596cb8fc-mxxk5_calico-apiserver(77168d35-e1f8-4112-9d0d-c414c5ff0981)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dce687295e0aa4fd3683ab39a0ffd8a56b46702269f4c51a3d5585efc9a92934\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:13.704314 containerd[1531]: time="2025-10-28T23:45:13.704136209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-hbmkd,Uid:51dc260a-540b-4f02-a0f1-2e415e73ff2c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.704749 kubelet[2684]: E1028 23:45:13.704559 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.704834 kubelet[2684]: E1028 23:45:13.704754 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" Oct 28 23:45:13.704834 kubelet[2684]: E1028 23:45:13.704775 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" Oct 28 23:45:13.704931 kubelet[2684]: E1028 23:45:13.704832 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548d874589-hbmkd_calico-apiserver(51dc260a-540b-4f02-a0f1-2e415e73ff2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548d874589-hbmkd_calico-apiserver(51dc260a-540b-4f02-a0f1-2e415e73ff2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe93fc0acee86936fd18d20bdee1fbe84e5d0a89c1147120101c627977bd3e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:13.705005 kubelet[2684]: E1028 23:45:13.704963 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.705033 kubelet[2684]: E1028 23:45:13.705010 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-588fb4b9d6-tcmsn" Oct 28 23:45:13.705055 kubelet[2684]: E1028 23:45:13.705030 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-588fb4b9d6-tcmsn" Oct 28 23:45:13.705100 kubelet[2684]: E1028 23:45:13.705068 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-588fb4b9d6-tcmsn_calico-system(39576903-7c39-47bd-b6fa-990764734118)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-588fb4b9d6-tcmsn_calico-system(39576903-7c39-47bd-b6fa-990764734118)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d60dff8a6deaf931a1574641331c89f39797b9b365e3f2d5ef3ddf6c7d1bb21a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-588fb4b9d6-tcmsn" podUID="39576903-7c39-47bd-b6fa-990764734118" Oct 28 23:45:13.705769 containerd[1531]: time="2025-10-28T23:45:13.705721957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k9xnc,Uid:3b25aa30-f019-4265-b023-79afab8fe52e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.705896 kubelet[2684]: E1028 23:45:13.705872 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.705956 kubelet[2684]: E1028 23:45:13.705906 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k9xnc" Oct 28 23:45:13.705956 kubelet[2684]: E1028 23:45:13.705922 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k9xnc" Oct 28 23:45:13.706028 kubelet[2684]: E1028 23:45:13.705952 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k9xnc_kube-system(3b25aa30-f019-4265-b023-79afab8fe52e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k9xnc_kube-system(3b25aa30-f019-4265-b023-79afab8fe52e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8748329f34ff7d7d86ad87800fa07b464ed37d794160afa968033f358a4e03c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k9xnc" podUID="3b25aa30-f019-4265-b023-79afab8fe52e" Oct 28 23:45:13.718892 containerd[1531]: time="2025-10-28T23:45:13.718817097Z" level=error msg="Failed to destroy network for sandbox \"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.721707 containerd[1531]: time="2025-10-28T23:45:13.721659715Z" level=error msg="Failed to destroy network for sandbox \"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.723108 containerd[1531]: 
time="2025-10-28T23:45:13.723003064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tsp9j,Uid:c19cc983-98a3-458b-a49c-1ccea440545a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.723695 kubelet[2684]: E1028 23:45:13.723405 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.723695 kubelet[2684]: E1028 23:45:13.723554 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tsp9j" Oct 28 23:45:13.723695 kubelet[2684]: E1028 23:45:13.723573 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tsp9j" Oct 28 23:45:13.724284 kubelet[2684]: 
E1028 23:45:13.723644 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tsp9j_kube-system(c19cc983-98a3-458b-a49c-1ccea440545a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tsp9j_kube-system(c19cc983-98a3-458b-a49c-1ccea440545a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26de7077a0bc204e437eab0ab30b170929ce2c6f3f7b908a352af7a5cd98692c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tsp9j" podUID="c19cc983-98a3-458b-a49c-1ccea440545a" Oct 28 23:45:13.725661 containerd[1531]: time="2025-10-28T23:45:13.725425926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d89cc7458-8hgnf,Uid:37c97580-71cc-4bc9-9010-0bb18fd1ed99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.727089 kubelet[2684]: E1028 23:45:13.727049 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.727170 kubelet[2684]: E1028 23:45:13.727107 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" Oct 28 23:45:13.727170 kubelet[2684]: E1028 23:45:13.727126 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" Oct 28 23:45:13.727229 kubelet[2684]: E1028 23:45:13.727180 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d89cc7458-8hgnf_calico-system(37c97580-71cc-4bc9-9010-0bb18fd1ed99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d89cc7458-8hgnf_calico-system(37c97580-71cc-4bc9-9010-0bb18fd1ed99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"221b3684fc0f3dfeabe6eead81d56a972959ad13c1d09d0e59d45126092764d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:13.728586 containerd[1531]: time="2025-10-28T23:45:13.728549822Z" level=error msg="Failed to destroy network for sandbox \"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.730569 containerd[1531]: time="2025-10-28T23:45:13.730525727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rc79g,Uid:c7a1e2dd-52c0-45e7-a13b-cfdddc111238,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.730756 kubelet[2684]: E1028 23:45:13.730723 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.730795 kubelet[2684]: E1028 23:45:13.730771 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.730795 kubelet[2684]: E1028 23:45:13.730791 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-rc79g" Oct 28 23:45:13.730856 kubelet[2684]: E1028 23:45:13.730832 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-rc79g_calico-system(c7a1e2dd-52c0-45e7-a13b-cfdddc111238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-rc79g_calico-system(c7a1e2dd-52c0-45e7-a13b-cfdddc111238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6153acdecc867b0eae35f6589e10b315e5f5c5943095c362ab110ace3560071d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:13.733228 containerd[1531]: time="2025-10-28T23:45:13.733185266Z" level=error msg="Failed to destroy network for sandbox \"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.734351 containerd[1531]: time="2025-10-28T23:45:13.734189419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-gxsbt,Uid:2d30b29a-2608-4dc4-a762-9ebd83a9d186,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.734452 kubelet[2684]: E1028 23:45:13.734411 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:13.734487 kubelet[2684]: E1028 23:45:13.734461 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" Oct 28 23:45:13.734487 kubelet[2684]: E1028 23:45:13.734478 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" Oct 28 23:45:13.734544 kubelet[2684]: E1028 23:45:13.734519 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-548d874589-gxsbt_calico-apiserver(2d30b29a-2608-4dc4-a762-9ebd83a9d186)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-548d874589-gxsbt_calico-apiserver(2d30b29a-2608-4dc4-a762-9ebd83a9d186)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02b77c96bbce8453d0d7e91325b3c94a18a5a747a266d0c188af81ccd5b2885d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:13.842523 kubelet[2684]: E1028 23:45:13.842389 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:13.844392 containerd[1531]: time="2025-10-28T23:45:13.844315412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 23:45:14.460219 systemd[1]: run-netns-cni\x2dce4cb0b8\x2de9e5\x2d9e74\x2d2695\x2d73de3a475587.mount: Deactivated successfully. Oct 28 23:45:14.460328 systemd[1]: run-netns-cni\x2dce955cf1\x2dd809\x2d61dc\x2dab57\x2d522c496dbf0c.mount: Deactivated successfully. Oct 28 23:45:14.460385 systemd[1]: run-netns-cni\x2ded02bfb1\x2d6bab\x2d359f\x2dffae\x2da5ff1541f88e.mount: Deactivated successfully. Oct 28 23:45:14.732128 systemd[1]: Created slice kubepods-besteffort-pod48b595fd_60f3_4e0e_96da_2d837a2764a7.slice - libcontainer container kubepods-besteffort-pod48b595fd_60f3_4e0e_96da_2d837a2764a7.slice. 
Oct 28 23:45:14.736006 containerd[1531]: time="2025-10-28T23:45:14.735963019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4shv,Uid:48b595fd-60f3-4e0e-96da-2d837a2764a7,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:14.786135 containerd[1531]: time="2025-10-28T23:45:14.786050926Z" level=error msg="Failed to destroy network for sandbox \"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:14.788029 containerd[1531]: time="2025-10-28T23:45:14.787965311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4shv,Uid:48b595fd-60f3-4e0e-96da-2d837a2764a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:14.788003 systemd[1]: run-netns-cni\x2d16765031\x2d487b\x2d7800\x2d88d4\x2d15aefe79a619.mount: Deactivated successfully. 
Oct 28 23:45:14.789015 kubelet[2684]: E1028 23:45:14.788650 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:45:14.789015 kubelet[2684]: E1028 23:45:14.788712 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4shv" Oct 28 23:45:14.789015 kubelet[2684]: E1028 23:45:14.788732 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4shv" Oct 28 23:45:14.789413 kubelet[2684]: E1028 23:45:14.788814 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97aeb7542044327bbce38dc4f9a5acfc372169b935476f8dc0079c23713e9ab3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:17.840968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772489400.mount: Deactivated successfully. Oct 28 23:45:18.081761 containerd[1531]: time="2025-10-28T23:45:18.057175052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 28 23:45:18.082110 containerd[1531]: time="2025-10-28T23:45:18.060137712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.21577506s" Oct 28 23:45:18.082110 containerd[1531]: time="2025-10-28T23:45:18.081790849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 28 23:45:18.082110 containerd[1531]: time="2025-10-28T23:45:18.065529957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:18.082651 containerd[1531]: time="2025-10-28T23:45:18.082606084Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:45:18.083132 containerd[1531]: time="2025-10-28T23:45:18.083103960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 
23:45:18.103890 containerd[1531]: time="2025-10-28T23:45:18.103798343Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 23:45:18.287727 containerd[1531]: time="2025-10-28T23:45:18.287679367Z" level=info msg="Container 5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:18.289200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361024134.mount: Deactivated successfully. Oct 28 23:45:18.297812 containerd[1531]: time="2025-10-28T23:45:18.297749740Z" level=info msg="CreateContainer within sandbox \"9709eb9e230e1779d5b817ab9592c91632910c3aace521797addf95ea6602771\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\"" Oct 28 23:45:18.299058 containerd[1531]: time="2025-10-28T23:45:18.298406416Z" level=info msg="StartContainer for \"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\"" Oct 28 23:45:18.300050 containerd[1531]: time="2025-10-28T23:45:18.300019565Z" level=info msg="connecting to shim 5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34" address="unix:///run/containerd/s/8b1f4b8fbca52a6f48a5fafa6d8e56d82545349c6d66a48a1ac4465fcd5a2c8e" protocol=ttrpc version=3 Oct 28 23:45:18.341618 systemd[1]: Started cri-containerd-5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34.scope - libcontainer container 5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34. Oct 28 23:45:18.376546 containerd[1531]: time="2025-10-28T23:45:18.376337780Z" level=info msg="StartContainer for \"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\" returns successfully" Oct 28 23:45:18.496972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Oct 28 23:45:18.497064 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 28 23:45:18.669641 kubelet[2684]: I1028 23:45:18.669452 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39576903-7c39-47bd-b6fa-990764734118-whisker-backend-key-pair\") pod \"39576903-7c39-47bd-b6fa-990764734118\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " Oct 28 23:45:18.669641 kubelet[2684]: I1028 23:45:18.669500 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39576903-7c39-47bd-b6fa-990764734118-whisker-ca-bundle\") pod \"39576903-7c39-47bd-b6fa-990764734118\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " Oct 28 23:45:18.669641 kubelet[2684]: I1028 23:45:18.669535 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7rbz\" (UniqueName: \"kubernetes.io/projected/39576903-7c39-47bd-b6fa-990764734118-kube-api-access-r7rbz\") pod \"39576903-7c39-47bd-b6fa-990764734118\" (UID: \"39576903-7c39-47bd-b6fa-990764734118\") " Oct 28 23:45:18.683981 kubelet[2684]: I1028 23:45:18.683926 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39576903-7c39-47bd-b6fa-990764734118-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "39576903-7c39-47bd-b6fa-990764734118" (UID: "39576903-7c39-47bd-b6fa-990764734118"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 23:45:18.685958 kubelet[2684]: I1028 23:45:18.685610 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39576903-7c39-47bd-b6fa-990764734118-kube-api-access-r7rbz" (OuterVolumeSpecName: "kube-api-access-r7rbz") pod "39576903-7c39-47bd-b6fa-990764734118" (UID: "39576903-7c39-47bd-b6fa-990764734118"). InnerVolumeSpecName "kube-api-access-r7rbz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 23:45:18.690897 kubelet[2684]: I1028 23:45:18.690866 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39576903-7c39-47bd-b6fa-990764734118-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "39576903-7c39-47bd-b6fa-990764734118" (UID: "39576903-7c39-47bd-b6fa-990764734118"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 23:45:18.733535 systemd[1]: Removed slice kubepods-besteffort-pod39576903_7c39_47bd_b6fa_990764734118.slice - libcontainer container kubepods-besteffort-pod39576903_7c39_47bd_b6fa_990764734118.slice. 
Oct 28 23:45:18.770764 kubelet[2684]: I1028 23:45:18.770730 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r7rbz\" (UniqueName: \"kubernetes.io/projected/39576903-7c39-47bd-b6fa-990764734118-kube-api-access-r7rbz\") on node \"localhost\" DevicePath \"\"" Oct 28 23:45:18.770924 kubelet[2684]: I1028 23:45:18.770896 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39576903-7c39-47bd-b6fa-990764734118-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 23:45:18.770924 kubelet[2684]: I1028 23:45:18.770910 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39576903-7c39-47bd-b6fa-990764734118-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 23:45:18.841809 systemd[1]: var-lib-kubelet-pods-39576903\x2d7c39\x2d47bd\x2db6fa\x2d990764734118-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7rbz.mount: Deactivated successfully. Oct 28 23:45:18.841910 systemd[1]: var-lib-kubelet-pods-39576903\x2d7c39\x2d47bd\x2db6fa\x2d990764734118-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 28 23:45:18.855719 kubelet[2684]: E1028 23:45:18.855679 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:18.878553 kubelet[2684]: I1028 23:45:18.878486 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zmxnq" podStartSLOduration=2.172597688 podStartE2EDuration="14.871718462s" podCreationTimestamp="2025-10-28 23:45:04 +0000 UTC" firstStartedPulling="2025-10-28 23:45:05.384229265 +0000 UTC m=+24.746911059" lastFinishedPulling="2025-10-28 23:45:18.083350039 +0000 UTC m=+37.446031833" observedRunningTime="2025-10-28 23:45:18.870408871 +0000 UTC m=+38.233090665" watchObservedRunningTime="2025-10-28 23:45:18.871718462 +0000 UTC m=+38.234400256" Oct 28 23:45:18.918815 systemd[1]: Created slice kubepods-besteffort-podfae43e44_3d5c_47be_b1b5_59a7cbe16d74.slice - libcontainer container kubepods-besteffort-podfae43e44_3d5c_47be_b1b5_59a7cbe16d74.slice. 
Oct 28 23:45:18.972959 kubelet[2684]: I1028 23:45:18.972831 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae43e44-3d5c-47be-b1b5-59a7cbe16d74-whisker-ca-bundle\") pod \"whisker-768f486948-gzj7l\" (UID: \"fae43e44-3d5c-47be-b1b5-59a7cbe16d74\") " pod="calico-system/whisker-768f486948-gzj7l" Oct 28 23:45:18.972959 kubelet[2684]: I1028 23:45:18.972886 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fae43e44-3d5c-47be-b1b5-59a7cbe16d74-whisker-backend-key-pair\") pod \"whisker-768f486948-gzj7l\" (UID: \"fae43e44-3d5c-47be-b1b5-59a7cbe16d74\") " pod="calico-system/whisker-768f486948-gzj7l" Oct 28 23:45:18.972959 kubelet[2684]: I1028 23:45:18.972904 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zmq\" (UniqueName: \"kubernetes.io/projected/fae43e44-3d5c-47be-b1b5-59a7cbe16d74-kube-api-access-t9zmq\") pod \"whisker-768f486948-gzj7l\" (UID: \"fae43e44-3d5c-47be-b1b5-59a7cbe16d74\") " pod="calico-system/whisker-768f486948-gzj7l" Oct 28 23:45:19.225920 containerd[1531]: time="2025-10-28T23:45:19.225806598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768f486948-gzj7l,Uid:fae43e44-3d5c-47be-b1b5-59a7cbe16d74,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:19.388614 systemd-networkd[1432]: cali4722c0b7abb: Link UP Oct 28 23:45:19.389294 systemd-networkd[1432]: cali4722c0b7abb: Gained carrier Oct 28 23:45:19.402033 containerd[1531]: 2025-10-28 23:45:19.245 [INFO][3859] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 23:45:19.402033 containerd[1531]: 2025-10-28 23:45:19.278 [INFO][3859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--768f486948--gzj7l-eth0 whisker-768f486948- calico-system fae43e44-3d5c-47be-b1b5-59a7cbe16d74 932 0 2025-10-28 23:45:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:768f486948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-768f486948-gzj7l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4722c0b7abb [] [] }} ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-" Oct 28 23:45:19.402033 containerd[1531]: 2025-10-28 23:45:19.278 [INFO][3859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402033 containerd[1531]: 2025-10-28 23:45:19.340 [INFO][3873] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" HandleID="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Workload="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.340 [INFO][3873] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" HandleID="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Workload="localhost-k8s-whisker--768f486948--gzj7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000500350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-768f486948-gzj7l", "timestamp":"2025-10-28 23:45:19.34042354 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.340 [INFO][3873] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.340 [INFO][3873] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.340 [INFO][3873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.353 [INFO][3873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" host="localhost" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.360 [INFO][3873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.364 [INFO][3873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.366 [INFO][3873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.368 [INFO][3873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:19.402250 containerd[1531]: 2025-10-28 23:45:19.368 [INFO][3873] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" host="localhost" Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.370 [INFO][3873] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334 Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.373 [INFO][3873] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" host="localhost" Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.379 [INFO][3873] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" host="localhost" Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.379 [INFO][3873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" host="localhost" Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.379 [INFO][3873] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 23:45:19.402597 containerd[1531]: 2025-10-28 23:45:19.379 [INFO][3873] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" HandleID="k8s-pod-network.076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Workload="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402708 containerd[1531]: 2025-10-28 23:45:19.381 [INFO][3859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--768f486948--gzj7l-eth0", GenerateName:"whisker-768f486948-", Namespace:"calico-system", SelfLink:"", UID:"fae43e44-3d5c-47be-b1b5-59a7cbe16d74", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768f486948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-768f486948-gzj7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4722c0b7abb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:19.402708 containerd[1531]: 2025-10-28 23:45:19.381 [INFO][3859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402779 containerd[1531]: 2025-10-28 23:45:19.381 [INFO][3859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4722c0b7abb ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402779 containerd[1531]: 2025-10-28 23:45:19.390 [INFO][3859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.402817 containerd[1531]: 2025-10-28 23:45:19.390 [INFO][3859] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--768f486948--gzj7l-eth0", GenerateName:"whisker-768f486948-", Namespace:"calico-system", SelfLink:"", UID:"fae43e44-3d5c-47be-b1b5-59a7cbe16d74", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 18, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768f486948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334", Pod:"whisker-768f486948-gzj7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4722c0b7abb", MAC:"aa:a6:2c:9c:64:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:19.402862 containerd[1531]: 2025-10-28 23:45:19.398 [INFO][3859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" Namespace="calico-system" Pod="whisker-768f486948-gzj7l" WorkloadEndpoint="localhost-k8s-whisker--768f486948--gzj7l-eth0" Oct 28 23:45:19.470696 containerd[1531]: time="2025-10-28T23:45:19.470650301Z" level=info msg="connecting to shim 076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334" address="unix:///run/containerd/s/11a69bc38e503b37eda3c18b24f5597cec918a17da5b9e55cc86fba928fd7b48" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:19.495618 systemd[1]: Started cri-containerd-076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334.scope - libcontainer container 076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334. 
Oct 28 23:45:19.506229 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:19.532041 containerd[1531]: time="2025-10-28T23:45:19.532004386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768f486948-gzj7l,Uid:fae43e44-3d5c-47be-b1b5-59a7cbe16d74,Namespace:calico-system,Attempt:0,} returns sandbox id \"076960c6d81b7a34d818e6fa1f2960d5efc520f903eb89c77b93c31cfbdb3334\"" Oct 28 23:45:19.533807 containerd[1531]: time="2025-10-28T23:45:19.533776495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:45:19.729735 containerd[1531]: time="2025-10-28T23:45:19.729688553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:19.730661 containerd[1531]: time="2025-10-28T23:45:19.730625867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 23:45:19.730745 containerd[1531]: time="2025-10-28T23:45:19.730712506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 23:45:19.730847 kubelet[2684]: E1028 23:45:19.730815 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:45:19.732982 kubelet[2684]: E1028 23:45:19.732895 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:45:19.738206 kubelet[2684]: E1028 23:45:19.738165 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:19.753592 containerd[1531]: time="2025-10-28T23:45:19.753412720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 23:45:19.858619 kubelet[2684]: I1028 23:45:19.858582 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:45:19.859015 kubelet[2684]: E1028 23:45:19.858996 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:19.959890 containerd[1531]: time="2025-10-28T23:45:19.959836470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:19.962740 containerd[1531]: time="2025-10-28T23:45:19.962684812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 23:45:19.964491 containerd[1531]: time="2025-10-28T23:45:19.962764411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 23:45:19.964591 
kubelet[2684]: E1028 23:45:19.962954 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:45:19.964591 kubelet[2684]: E1028 23:45:19.963012 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:45:19.964591 kubelet[2684]: E1028 23:45:19.963087 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:19.964684 kubelet[2684]: E1028 23:45:19.963127 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768f486948-gzj7l" podUID="fae43e44-3d5c-47be-b1b5-59a7cbe16d74" Oct 28 23:45:20.174249 systemd-networkd[1432]: vxlan.calico: Link UP Oct 28 23:45:20.174256 systemd-networkd[1432]: vxlan.calico: Gained carrier Oct 28 23:45:20.724773 kubelet[2684]: I1028 23:45:20.724723 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39576903-7c39-47bd-b6fa-990764734118" path="/var/lib/kubelet/pods/39576903-7c39-47bd-b6fa-990764734118/volumes" Oct 28 23:45:20.816645 systemd-networkd[1432]: cali4722c0b7abb: Gained IPv6LL Oct 28 23:45:20.865580 kubelet[2684]: E1028 23:45:20.865505 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768f486948-gzj7l" podUID="fae43e44-3d5c-47be-b1b5-59a7cbe16d74" Oct 28 23:45:21.323622 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Oct 28 23:45:22.589414 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:53924.service - OpenSSH per-connection 
server daemon (10.0.0.1:53924). Oct 28 23:45:22.653781 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 53924 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:22.655351 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:22.660169 systemd-logind[1510]: New session 8 of user core. Oct 28 23:45:22.668607 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 28 23:45:22.822642 sshd[4146]: Connection closed by 10.0.0.1 port 53924 Oct 28 23:45:22.823180 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:22.826583 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:53924.service: Deactivated successfully. Oct 28 23:45:22.828157 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 23:45:22.828881 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Oct 28 23:45:22.830079 systemd-logind[1510]: Removed session 8. Oct 28 23:45:24.758611 containerd[1531]: time="2025-10-28T23:45:24.758538529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rc79g,Uid:c7a1e2dd-52c0-45e7-a13b-cfdddc111238,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:24.760267 containerd[1531]: time="2025-10-28T23:45:24.760123640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-gxsbt,Uid:2d30b29a-2608-4dc4-a762-9ebd83a9d186,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:24.887401 systemd-networkd[1432]: calid1ec81d6c1e: Link UP Oct 28 23:45:24.887815 systemd-networkd[1432]: calid1ec81d6c1e: Gained carrier Oct 28 23:45:24.902479 containerd[1531]: 2025-10-28 23:45:24.813 [INFO][4163] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--rc79g-eth0 goldmane-7c778bb748- calico-system c7a1e2dd-52c0-45e7-a13b-cfdddc111238 866 0 2025-10-28 23:45:02 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-rc79g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid1ec81d6c1e [] [] }} ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-" Oct 28 23:45:24.902479 containerd[1531]: 2025-10-28 23:45:24.813 [INFO][4163] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.902479 containerd[1531]: 2025-10-28 23:45:24.842 [INFO][4190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" HandleID="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Workload="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.842 [INFO][4190] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" HandleID="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Workload="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-rc79g", "timestamp":"2025-10-28 23:45:24.842087931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.842 [INFO][4190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.842 [INFO][4190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.842 [INFO][4190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.852 [INFO][4190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" host="localhost" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.859 [INFO][4190] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.864 [INFO][4190] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.866 [INFO][4190] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.868 [INFO][4190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:24.902698 containerd[1531]: 2025-10-28 23:45:24.868 [INFO][4190] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" host="localhost" Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.870 [INFO][4190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2 Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.874 [INFO][4190] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" host="localhost" Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.878 [INFO][4190] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" host="localhost" Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.878 [INFO][4190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" host="localhost" Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.879 [INFO][4190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:24.902905 containerd[1531]: 2025-10-28 23:45:24.879 [INFO][4190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" HandleID="k8s-pod-network.85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Workload="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.903052 containerd[1531]: 2025-10-28 23:45:24.882 [INFO][4163] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--rc79g-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c7a1e2dd-52c0-45e7-a13b-cfdddc111238", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 2, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-rc79g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1ec81d6c1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:24.903052 containerd[1531]: 2025-10-28 23:45:24.882 [INFO][4163] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.903126 containerd[1531]: 2025-10-28 23:45:24.882 [INFO][4163] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1ec81d6c1e ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.903126 containerd[1531]: 2025-10-28 23:45:24.887 [INFO][4163] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.903169 containerd[1531]: 2025-10-28 23:45:24.888 [INFO][4163] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--rc79g-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"c7a1e2dd-52c0-45e7-a13b-cfdddc111238", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2", Pod:"goldmane-7c778bb748-rc79g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1ec81d6c1e", MAC:"56:af:21:81:eb:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:24.903214 containerd[1531]: 2025-10-28 23:45:24.898 [INFO][4163] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" Namespace="calico-system" Pod="goldmane-7c778bb748-rc79g" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--rc79g-eth0" Oct 28 23:45:24.934825 containerd[1531]: time="2025-10-28T23:45:24.934780121Z" level=info msg="connecting to shim 85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2" address="unix:///run/containerd/s/ecaf9e3e74c51cee812a75eea1f727b918087b297c261b67a2ad0e1279c6e5a5" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:24.961591 systemd[1]: Started cri-containerd-85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2.scope - libcontainer container 85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2. Oct 28 23:45:24.996581 systemd-networkd[1432]: cali862b686df7d: Link UP Oct 28 23:45:24.997149 systemd-networkd[1432]: cali862b686df7d: Gained carrier Oct 28 23:45:25.004918 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:25.013069 containerd[1531]: 2025-10-28 23:45:24.815 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0 calico-apiserver-548d874589- calico-apiserver 2d30b29a-2608-4dc4-a762-9ebd83a9d186 868 0 2025-10-28 23:44:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548d874589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548d874589-gxsbt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali862b686df7d [] [] }} ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" 
Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-" Oct 28 23:45:25.013069 containerd[1531]: 2025-10-28 23:45:24.815 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013069 containerd[1531]: 2025-10-28 23:45:24.843 [INFO][4196] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" HandleID="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Workload="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.843 [INFO][4196] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" HandleID="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Workload="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548d874589-gxsbt", "timestamp":"2025-10-28 23:45:24.843607363 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.843 [INFO][4196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.879 [INFO][4196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.879 [INFO][4196] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.952 [INFO][4196] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" host="localhost" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.960 [INFO][4196] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.968 [INFO][4196] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.973 [INFO][4196] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.976 [INFO][4196] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:25.013232 containerd[1531]: 2025-10-28 23:45:24.977 [INFO][4196] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" host="localhost" Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.979 [INFO][4196] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4 Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.984 [INFO][4196] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" host="localhost" Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.991 [INFO][4196] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" host="localhost" Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.991 [INFO][4196] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" host="localhost" Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.991 [INFO][4196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:25.013473 containerd[1531]: 2025-10-28 23:45:24.991 [INFO][4196] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" HandleID="k8s-pod-network.7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Workload="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013590 containerd[1531]: 2025-10-28 23:45:24.993 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0", GenerateName:"calico-apiserver-548d874589-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d30b29a-2608-4dc4-a762-9ebd83a9d186", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d874589", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548d874589-gxsbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali862b686df7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:25.013639 containerd[1531]: 2025-10-28 23:45:24.993 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013639 containerd[1531]: 2025-10-28 23:45:24.993 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali862b686df7d ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013639 containerd[1531]: 2025-10-28 23:45:24.997 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.013700 containerd[1531]: 2025-10-28 23:45:24.998 [INFO][4170] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0", GenerateName:"calico-apiserver-548d874589-", Namespace:"calico-apiserver", SelfLink:"", UID:"2d30b29a-2608-4dc4-a762-9ebd83a9d186", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d874589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4", Pod:"calico-apiserver-548d874589-gxsbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali862b686df7d", MAC:"42:8f:be:dd:de:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:25.013745 containerd[1531]: 2025-10-28 23:45:25.006 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-gxsbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--gxsbt-eth0" Oct 28 23:45:25.037898 containerd[1531]: time="2025-10-28T23:45:25.037577178Z" level=info msg="connecting to shim 7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4" address="unix:///run/containerd/s/196d82a981f252bdd34d5b362430f594371039e3f216dbbf57bee556e1cd9dcb" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:25.057532 containerd[1531]: time="2025-10-28T23:45:25.057481227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-rc79g,Uid:c7a1e2dd-52c0-45e7-a13b-cfdddc111238,Namespace:calico-system,Attempt:0,} returns sandbox id \"85ed01f32a8db6fbeacb209ce983691ad73d865e15dfa1f2528311b19a9b78b2\"" Oct 28 23:45:25.060487 containerd[1531]: time="2025-10-28T23:45:25.059591615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 23:45:25.074614 systemd[1]: Started cri-containerd-7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4.scope - libcontainer container 7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4. 
Oct 28 23:45:25.086109 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:25.118672 containerd[1531]: time="2025-10-28T23:45:25.118568405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-gxsbt,Uid:2d30b29a-2608-4dc4-a762-9ebd83a9d186,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7b982a82d76c78e1d3d263a3035de5017cc0d6143923d0b69a2577a7db9c73a4\"" Oct 28 23:45:25.270528 containerd[1531]: time="2025-10-28T23:45:25.270377835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:25.271413 containerd[1531]: time="2025-10-28T23:45:25.271359389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 23:45:25.271413 containerd[1531]: time="2025-10-28T23:45:25.271395669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:25.271781 kubelet[2684]: E1028 23:45:25.271604 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:45:25.271781 kubelet[2684]: E1028 23:45:25.271661 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:45:25.272423 kubelet[2684]: E1028 23:45:25.271847 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rc79g_calico-system(c7a1e2dd-52c0-45e7-a13b-cfdddc111238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:25.272423 kubelet[2684]: E1028 23:45:25.271887 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:25.273576 containerd[1531]: time="2025-10-28T23:45:25.273542057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:25.488271 containerd[1531]: time="2025-10-28T23:45:25.488186975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:25.489652 containerd[1531]: time="2025-10-28T23:45:25.489611087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:25.489892 containerd[1531]: time="2025-10-28T23:45:25.489635007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 
23:45:25.489952 kubelet[2684]: E1028 23:45:25.489840 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:25.489952 kubelet[2684]: E1028 23:45:25.489886 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:25.490011 kubelet[2684]: E1028 23:45:25.489957 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-gxsbt_calico-apiserver(2d30b29a-2608-4dc4-a762-9ebd83a9d186): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:25.490011 kubelet[2684]: E1028 23:45:25.489989 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:25.728556 containerd[1531]: time="2025-10-28T23:45:25.728416311Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-d89cc7458-8hgnf,Uid:37c97580-71cc-4bc9-9010-0bb18fd1ed99,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:25.852194 systemd-networkd[1432]: cali4963d5dabc7: Link UP Oct 28 23:45:25.852539 systemd-networkd[1432]: cali4963d5dabc7: Gained carrier Oct 28 23:45:25.869611 containerd[1531]: 2025-10-28 23:45:25.777 [INFO][4329] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0 calico-kube-controllers-d89cc7458- calico-system 37c97580-71cc-4bc9-9010-0bb18fd1ed99 864 0 2025-10-28 23:45:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d89cc7458 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d89cc7458-8hgnf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4963d5dabc7 [] [] }} ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-" Oct 28 23:45:25.869611 containerd[1531]: 2025-10-28 23:45:25.777 [INFO][4329] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.869611 containerd[1531]: 2025-10-28 23:45:25.803 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" 
HandleID="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Workload="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.803 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" HandleID="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Workload="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d89cc7458-8hgnf", "timestamp":"2025-10-28 23:45:25.803778329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.803 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.804 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.804 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.816 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" host="localhost" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.822 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.827 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.831 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.833 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:25.870075 containerd[1531]: 2025-10-28 23:45:25.833 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" host="localhost" Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.835 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2 Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.839 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" host="localhost" Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.845 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" host="localhost" Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.845 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" host="localhost" Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.845 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:25.870292 containerd[1531]: 2025-10-28 23:45:25.845 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" HandleID="k8s-pod-network.8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Workload="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.870530 containerd[1531]: 2025-10-28 23:45:25.848 [INFO][4329] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0", GenerateName:"calico-kube-controllers-d89cc7458-", Namespace:"calico-system", SelfLink:"", UID:"37c97580-71cc-4bc9-9010-0bb18fd1ed99", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d89cc7458", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d89cc7458-8hgnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4963d5dabc7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:25.870588 containerd[1531]: 2025-10-28 23:45:25.848 [INFO][4329] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.870588 containerd[1531]: 2025-10-28 23:45:25.848 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4963d5dabc7 ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.870588 containerd[1531]: 2025-10-28 23:45:25.851 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.870685 containerd[1531]: 2025-10-28 
23:45:25.852 [INFO][4329] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0", GenerateName:"calico-kube-controllers-d89cc7458-", Namespace:"calico-system", SelfLink:"", UID:"37c97580-71cc-4bc9-9010-0bb18fd1ed99", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d89cc7458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2", Pod:"calico-kube-controllers-d89cc7458-8hgnf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4963d5dabc7", MAC:"92:1a:3d:da:88:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:25.870754 containerd[1531]: 2025-10-28 
23:45:25.864 [INFO][4329] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" Namespace="calico-system" Pod="calico-kube-controllers-d89cc7458-8hgnf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d89cc7458--8hgnf-eth0" Oct 28 23:45:25.882245 kubelet[2684]: E1028 23:45:25.881874 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:25.886461 kubelet[2684]: E1028 23:45:25.886390 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:25.904936 containerd[1531]: time="2025-10-28T23:45:25.904888283Z" level=info msg="connecting to shim 8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2" address="unix:///run/containerd/s/9c43fa608adb90298107dc95809a27bc84b04352fd2202c52a82dfdee00cad02" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:25.931699 systemd[1]: Started 
cri-containerd-8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2.scope - libcontainer container 8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2. Oct 28 23:45:25.942666 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:25.968305 containerd[1531]: time="2025-10-28T23:45:25.968268328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d89cc7458-8hgnf,Uid:37c97580-71cc-4bc9-9010-0bb18fd1ed99,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c3380fa34c3c5bba649b91becc988bb9fb93638f04c0fb33977538d3b0e64f2\"" Oct 28 23:45:25.969677 containerd[1531]: time="2025-10-28T23:45:25.969652640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 23:45:26.150414 containerd[1531]: time="2025-10-28T23:45:26.150358925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:26.151377 containerd[1531]: time="2025-10-28T23:45:26.151327800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 23:45:26.151451 containerd[1531]: time="2025-10-28T23:45:26.151405199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 23:45:26.151627 kubelet[2684]: E1028 23:45:26.151589 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:45:26.151684 kubelet[2684]: E1028 23:45:26.151638 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:45:26.151741 kubelet[2684]: E1028 23:45:26.151716 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d89cc7458-8hgnf_calico-system(37c97580-71cc-4bc9-9010-0bb18fd1ed99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:26.151975 kubelet[2684]: E1028 23:45:26.151756 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:26.379642 systemd-networkd[1432]: cali862b686df7d: Gained IPv6LL Oct 28 23:45:26.635688 systemd-networkd[1432]: calid1ec81d6c1e: Gained IPv6LL Oct 28 23:45:26.725054 kubelet[2684]: E1028 23:45:26.725016 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:26.726110 containerd[1531]: time="2025-10-28T23:45:26.725654448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tsp9j,Uid:c19cc983-98a3-458b-a49c-1ccea440545a,Namespace:kube-system,Attempt:0,}" Oct 28 23:45:26.728551 containerd[1531]: time="2025-10-28T23:45:26.728237554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-hbmkd,Uid:51dc260a-540b-4f02-a0f1-2e415e73ff2c,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:26.868914 systemd-networkd[1432]: calice6c216e69c: Link UP Oct 28 23:45:26.869060 systemd-networkd[1432]: calice6c216e69c: Gained carrier Oct 28 23:45:26.884785 containerd[1531]: 2025-10-28 23:45:26.785 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--tsp9j-eth0 coredns-66bc5c9577- kube-system c19cc983-98a3-458b-a49c-1ccea440545a 856 0 2025-10-28 23:44:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-tsp9j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice6c216e69c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-" Oct 28 23:45:26.884785 containerd[1531]: 2025-10-28 23:45:26.786 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.884785 containerd[1531]: 
2025-10-28 23:45:26.817 [INFO][4447] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" HandleID="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Workload="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.818 [INFO][4447] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" HandleID="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Workload="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-tsp9j", "timestamp":"2025-10-28 23:45:26.817734903 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.818 [INFO][4447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.818 [INFO][4447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.818 [INFO][4447] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.834 [INFO][4447] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" host="localhost" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.840 [INFO][4447] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.845 [INFO][4447] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.847 [INFO][4447] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.850 [INFO][4447] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:26.885180 containerd[1531]: 2025-10-28 23:45:26.850 [INFO][4447] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" host="localhost" Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.852 [INFO][4447] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3 Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.856 [INFO][4447] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" host="localhost" Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4447] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" host="localhost" Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4447] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" host="localhost" Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:26.885413 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4447] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" HandleID="k8s-pod-network.5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Workload="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.866 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tsp9j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c19cc983-98a3-458b-a49c-1ccea440545a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-tsp9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice6c216e69c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.866 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.866 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice6c216e69c ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 
23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.868 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.870 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tsp9j-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c19cc983-98a3-458b-a49c-1ccea440545a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3", Pod:"coredns-66bc5c9577-tsp9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice6c216e69c", 
MAC:"fa:48:9e:71:92:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:26.885572 containerd[1531]: 2025-10-28 23:45:26.880 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" Namespace="kube-system" Pod="coredns-66bc5c9577-tsp9j" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tsp9j-eth0" Oct 28 23:45:26.888279 kubelet[2684]: E1028 23:45:26.888050 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:26.888279 kubelet[2684]: E1028 23:45:26.888147 2684 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:26.890467 kubelet[2684]: E1028 23:45:26.890165 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:26.927718 containerd[1531]: time="2025-10-28T23:45:26.927661700Z" level=info msg="connecting to shim 5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3" address="unix:///run/containerd/s/223a4133fa2499e7255d6940fcf7ddc161e198ee5d3cce7018de3082dd97ee0d" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:26.961839 systemd[1]: Started cri-containerd-5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3.scope - libcontainer container 5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3. 
Oct 28 23:45:26.979304 systemd-networkd[1432]: cali235b0743bd4: Link UP Oct 28 23:45:26.979587 systemd-networkd[1432]: cali235b0743bd4: Gained carrier Oct 28 23:45:26.980681 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.783 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0 calico-apiserver-548d874589- calico-apiserver 51dc260a-540b-4f02-a0f1-2e415e73ff2c 862 0 2025-10-28 23:44:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:548d874589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-548d874589-hbmkd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali235b0743bd4 [] [] }} ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.783 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.829 [INFO][4440] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" HandleID="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" 
Workload="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.830 [INFO][4440] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" HandleID="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Workload="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a18d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-548d874589-hbmkd", "timestamp":"2025-10-28 23:45:26.829862837 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.830 [INFO][4440] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4440] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.863 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.940 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.947 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.954 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.957 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.960 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.960 [INFO][4440] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.962 [INFO][4440] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.966 [INFO][4440] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.974 [INFO][4440] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.974 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" host="localhost" Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.974 [INFO][4440] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:27.000257 containerd[1531]: 2025-10-28 23:45:26.974 [INFO][4440] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" HandleID="k8s-pod-network.ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Workload="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.976 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0", GenerateName:"calico-apiserver-548d874589-", Namespace:"calico-apiserver", SelfLink:"", UID:"51dc260a-540b-4f02-a0f1-2e415e73ff2c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d874589", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-548d874589-hbmkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali235b0743bd4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.976 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.976 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali235b0743bd4 ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.980 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.984 [INFO][4418] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0", GenerateName:"calico-apiserver-548d874589-", Namespace:"calico-apiserver", SelfLink:"", UID:"51dc260a-540b-4f02-a0f1-2e415e73ff2c", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"548d874589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf", Pod:"calico-apiserver-548d874589-hbmkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali235b0743bd4", MAC:"de:66:3f:ef:95:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:27.001574 containerd[1531]: 2025-10-28 23:45:26.997 [INFO][4418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" Namespace="calico-apiserver" Pod="calico-apiserver-548d874589-hbmkd" WorkloadEndpoint="localhost-k8s-calico--apiserver--548d874589--hbmkd-eth0" Oct 28 23:45:27.010159 containerd[1531]: time="2025-10-28T23:45:27.010110169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tsp9j,Uid:c19cc983-98a3-458b-a49c-1ccea440545a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3\"" Oct 28 23:45:27.011533 kubelet[2684]: E1028 23:45:27.011501 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:27.019172 containerd[1531]: time="2025-10-28T23:45:27.018779562Z" level=info msg="CreateContainer within sandbox \"5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:45:27.035472 containerd[1531]: time="2025-10-28T23:45:27.035134754Z" level=info msg="Container 461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:27.036041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330319468.mount: Deactivated successfully. 
Oct 28 23:45:27.036998 containerd[1531]: time="2025-10-28T23:45:27.036963864Z" level=info msg="connecting to shim ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf" address="unix:///run/containerd/s/40d414576bfa908ec8320b9ebb9587eb29a0cca9bdc92c17b993a7c95451f08c" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:27.043215 containerd[1531]: time="2025-10-28T23:45:27.043162791Z" level=info msg="CreateContainer within sandbox \"5a991dab2b9901c8f92b50474e62c7c8319a64c7a65d4bf14ddafd246fbe95c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d\"" Oct 28 23:45:27.045467 containerd[1531]: time="2025-10-28T23:45:27.044038426Z" level=info msg="StartContainer for \"461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d\"" Oct 28 23:45:27.047508 containerd[1531]: time="2025-10-28T23:45:27.047469648Z" level=info msg="connecting to shim 461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d" address="unix:///run/containerd/s/223a4133fa2499e7255d6940fcf7ddc161e198ee5d3cce7018de3082dd97ee0d" protocol=ttrpc version=3 Oct 28 23:45:27.059653 systemd[1]: Started cri-containerd-ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf.scope - libcontainer container ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf. Oct 28 23:45:27.066474 systemd[1]: Started cri-containerd-461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d.scope - libcontainer container 461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d. 
Oct 28 23:45:27.075784 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:27.100453 containerd[1531]: time="2025-10-28T23:45:27.100404763Z" level=info msg="StartContainer for \"461b30b2d6aa038b9335c98922d92906e6c219aaa52f01c431fb119854c3932d\" returns successfully" Oct 28 23:45:27.102397 containerd[1531]: time="2025-10-28T23:45:27.102319992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-548d874589-hbmkd,Uid:51dc260a-540b-4f02-a0f1-2e415e73ff2c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ff7d4838304f23f304d50985394f0af60d985c9be369d5faf61a20b0cc790dcf\"" Oct 28 23:45:27.104297 containerd[1531]: time="2025-10-28T23:45:27.104265662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:27.211856 systemd-networkd[1432]: cali4963d5dabc7: Gained IPv6LL Oct 28 23:45:27.337866 containerd[1531]: time="2025-10-28T23:45:27.337807365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:27.338801 containerd[1531]: time="2025-10-28T23:45:27.338763840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:27.338875 containerd[1531]: time="2025-10-28T23:45:27.338856639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:27.339035 kubelet[2684]: E1028 23:45:27.338996 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:27.339124 kubelet[2684]: E1028 23:45:27.339045 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:27.339175 kubelet[2684]: E1028 23:45:27.339122 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-hbmkd_calico-apiserver(51dc260a-540b-4f02-a0f1-2e415e73ff2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:27.339175 kubelet[2684]: E1028 23:45:27.339156 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:27.839080 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:53940.service - OpenSSH per-connection server daemon (10.0.0.1:53940). 
Oct 28 23:45:27.898110 kubelet[2684]: E1028 23:45:27.895067 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:27.899982 kubelet[2684]: E1028 23:45:27.898121 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:27.899982 kubelet[2684]: E1028 23:45:27.899535 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:27.945090 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 53940 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:27.948083 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:27.950975 kubelet[2684]: I1028 23:45:27.950326 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tsp9j" 
podStartSLOduration=39.950303708 podStartE2EDuration="39.950303708s" podCreationTimestamp="2025-10-28 23:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:45:27.948916356 +0000 UTC m=+47.311598110" watchObservedRunningTime="2025-10-28 23:45:27.950303708 +0000 UTC m=+47.312985542" Oct 28 23:45:27.957970 systemd-logind[1510]: New session 9 of user core. Oct 28 23:45:27.966654 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 23:45:28.043594 systemd-networkd[1432]: calice6c216e69c: Gained IPv6LL Oct 28 23:45:28.134895 sshd[4612]: Connection closed by 10.0.0.1 port 53940 Oct 28 23:45:28.135165 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:28.139060 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:53940.service: Deactivated successfully. Oct 28 23:45:28.140796 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 23:45:28.141599 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Oct 28 23:45:28.142716 systemd-logind[1510]: Removed session 9. 
Oct 28 23:45:28.555761 systemd-networkd[1432]: cali235b0743bd4: Gained IPv6LL Oct 28 23:45:28.725509 containerd[1531]: time="2025-10-28T23:45:28.724953490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c596cb8fc-mxxk5,Uid:77168d35-e1f8-4112-9d0d-c414c5ff0981,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:45:28.726853 kubelet[2684]: E1028 23:45:28.726778 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:28.727388 containerd[1531]: time="2025-10-28T23:45:28.727351957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k9xnc,Uid:3b25aa30-f019-4265-b023-79afab8fe52e,Namespace:kube-system,Attempt:0,}" Oct 28 23:45:28.852239 systemd-networkd[1432]: calib01a3d9bdfa: Link UP Oct 28 23:45:28.852581 systemd-networkd[1432]: calib01a3d9bdfa: Gained carrier Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.771 [INFO][4626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0 calico-apiserver-6c596cb8fc- calico-apiserver 77168d35-e1f8-4112-9d0d-c414c5ff0981 865 0 2025-10-28 23:44:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c596cb8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c596cb8fc-mxxk5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib01a3d9bdfa [] [] }} ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-" Oct 28 23:45:28.870897 containerd[1531]: 
2025-10-28 23:45:28.771 [INFO][4626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.804 [INFO][4657] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" HandleID="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Workload="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.805 [INFO][4657] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" HandleID="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Workload="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059cac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c596cb8fc-mxxk5", "timestamp":"2025-10-28 23:45:28.804632629 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.806 [INFO][4657] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.806 [INFO][4657] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.806 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.817 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.822 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.827 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.829 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.832 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.832 [INFO][4657] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.834 [INFO][4657] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82 Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.838 [INFO][4657] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4657] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" host="localhost" Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4657] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:28.870897 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4657] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" HandleID="k8s-pod-network.c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Workload="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.848 [INFO][4626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0", GenerateName:"calico-apiserver-6c596cb8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"77168d35-e1f8-4112-9d0d-c414c5ff0981", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c596cb8fc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c596cb8fc-mxxk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib01a3d9bdfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.848 [INFO][4626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.848 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib01a3d9bdfa ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.853 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.854 [INFO][4626] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0", GenerateName:"calico-apiserver-6c596cb8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"77168d35-e1f8-4112-9d0d-c414c5ff0981", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c596cb8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82", Pod:"calico-apiserver-6c596cb8fc-mxxk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib01a3d9bdfa", MAC:"a2:6b:96:53:a1:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:28.871636 containerd[1531]: 2025-10-28 23:45:28.867 [INFO][4626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" Namespace="calico-apiserver" Pod="calico-apiserver-6c596cb8fc-mxxk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c596cb8fc--mxxk5-eth0" Oct 28 23:45:28.900889 kubelet[2684]: E1028 23:45:28.900856 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:28.903657 kubelet[2684]: E1028 23:45:28.903430 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:28.926427 containerd[1531]: time="2025-10-28T23:45:28.925797068Z" level=info msg="connecting to shim c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82" address="unix:///run/containerd/s/3c8f043dd809fd641ba5695a3676d16b5a3271bd48fb7d3549a416c93cee570e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:28.953686 systemd[1]: Started cri-containerd-c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82.scope - libcontainer container c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82. 
Oct 28 23:45:28.968630 systemd-networkd[1432]: calid3b25275f71: Link UP Oct 28 23:45:28.968971 systemd-networkd[1432]: calid3b25275f71: Gained carrier Oct 28 23:45:28.979892 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.784 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--k9xnc-eth0 coredns-66bc5c9577- kube-system 3b25aa30-f019-4265-b023-79afab8fe52e 859 0 2025-10-28 23:44:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-k9xnc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid3b25275f71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.785 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.816 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" HandleID="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Workload="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:28.984371 containerd[1531]: 
2025-10-28 23:45:28.816 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" HandleID="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Workload="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-k9xnc", "timestamp":"2025-10-28 23:45:28.816106408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.816 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.845 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.918 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.923 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.935 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.937 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.940 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.940 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.943 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.953 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.964 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.964 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" host="localhost" Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.964 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:28.984371 containerd[1531]: 2025-10-28 23:45:28.964 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" HandleID="k8s-pod-network.a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Workload="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.966 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k9xnc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b25aa30-f019-4265-b023-79afab8fe52e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-k9xnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b25275f71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.966 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.966 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3b25275f71 ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 
23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.968 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.969 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--k9xnc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3b25aa30-f019-4265-b023-79afab8fe52e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 44, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef", Pod:"coredns-66bc5c9577-k9xnc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid3b25275f71", 
MAC:"e2:f2:e1:4b:d9:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:28.986872 containerd[1531]: 2025-10-28 23:45:28.979 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" Namespace="kube-system" Pod="coredns-66bc5c9577-k9xnc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--k9xnc-eth0" Oct 28 23:45:29.014272 containerd[1531]: time="2025-10-28T23:45:29.014219882Z" level=info msg="connecting to shim a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef" address="unix:///run/containerd/s/d35903eb43bd6e06faebc79cc2aba5b87007cfd3ed2e7c824125e70fef1aa442" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:29.033717 containerd[1531]: time="2025-10-28T23:45:29.033670821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c596cb8fc-mxxk5,Uid:77168d35-e1f8-4112-9d0d-c414c5ff0981,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c6066d0bc54d0f97d9f27106aeb6a56e62912a2e96436e78035df7c86bd54a82\"" Oct 28 23:45:29.039663 containerd[1531]: time="2025-10-28T23:45:29.039587111Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:29.054891 systemd[1]: Started cri-containerd-a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef.scope - libcontainer container a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef. Oct 28 23:45:29.071682 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:29.100019 containerd[1531]: time="2025-10-28T23:45:29.099938717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k9xnc,Uid:3b25aa30-f019-4265-b023-79afab8fe52e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef\"" Oct 28 23:45:29.103889 kubelet[2684]: E1028 23:45:29.102901 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:29.119253 containerd[1531]: time="2025-10-28T23:45:29.119119698Z" level=info msg="CreateContainer within sandbox \"a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:45:29.132305 containerd[1531]: time="2025-10-28T23:45:29.132182390Z" level=info msg="Container a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:45:29.146476 containerd[1531]: time="2025-10-28T23:45:29.146405356Z" level=info msg="CreateContainer within sandbox \"a8fa89dc15c473dabc7ab37fa7ad7e4ad2b4ab80cd6ff3126521874ccbc6ddef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458\"" Oct 28 23:45:29.147723 containerd[1531]: time="2025-10-28T23:45:29.147686709Z" level=info msg="StartContainer for \"a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458\"" Oct 28 23:45:29.148815 
containerd[1531]: time="2025-10-28T23:45:29.148783064Z" level=info msg="connecting to shim a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458" address="unix:///run/containerd/s/d35903eb43bd6e06faebc79cc2aba5b87007cfd3ed2e7c824125e70fef1aa442" protocol=ttrpc version=3 Oct 28 23:45:29.167697 systemd[1]: Started cri-containerd-a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458.scope - libcontainer container a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458. Oct 28 23:45:29.211824 containerd[1531]: time="2025-10-28T23:45:29.211709017Z" level=info msg="StartContainer for \"a7ca032c500d13cf7996b84a2cc06cfd8b4d95d92e8b3480d6ac329aaad08458\" returns successfully" Oct 28 23:45:29.286764 containerd[1531]: time="2025-10-28T23:45:29.286716467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:29.289773 containerd[1531]: time="2025-10-28T23:45:29.289693252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:29.289944 containerd[1531]: time="2025-10-28T23:45:29.289796571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:29.290182 kubelet[2684]: E1028 23:45:29.290122 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:29.290182 kubelet[2684]: E1028 23:45:29.290167 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:29.290407 kubelet[2684]: E1028 23:45:29.290385 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c596cb8fc-mxxk5_calico-apiserver(77168d35-e1f8-4112-9d0d-c414c5ff0981): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:29.290766 kubelet[2684]: E1028 23:45:29.290724 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:29.725528 containerd[1531]: time="2025-10-28T23:45:29.725484029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4shv,Uid:48b595fd-60f3-4e0e-96da-2d837a2764a7,Namespace:calico-system,Attempt:0,}" Oct 28 23:45:29.905115 kubelet[2684]: E1028 23:45:29.905075 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:29.907311 kubelet[2684]: E1028 23:45:29.907282 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:29.907512 kubelet[2684]: E1028 23:45:29.907485 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:29.940283 systemd-networkd[1432]: calib47b3a4571a: Link UP Oct 28 23:45:29.940923 systemd-networkd[1432]: calib47b3a4571a: Gained carrier Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.788 [INFO][4824] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h4shv-eth0 csi-node-driver- calico-system 48b595fd-60f3-4e0e-96da-2d837a2764a7 750 0 2025-10-28 23:45:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h4shv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib47b3a4571a [] [] }} ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.788 [INFO][4824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.822 [INFO][4838] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" HandleID="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Workload="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.823 [INFO][4838] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" HandleID="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Workload="localhost-k8s-csi--node--driver--h4shv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h4shv", "timestamp":"2025-10-28 23:45:29.822991083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.823 [INFO][4838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.823 [INFO][4838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.823 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.834 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.838 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.843 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.845 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.847 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.847 [INFO][4838] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.849 [INFO][4838] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.865 [INFO][4838] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.931 [INFO][4838] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.931 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" host="localhost" Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.931 [INFO][4838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:45:30.023086 containerd[1531]: 2025-10-28 23:45:29.931 [INFO][4838] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" HandleID="k8s-pod-network.2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Workload="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:29.936 [INFO][4824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4shv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"48b595fd-60f3-4e0e-96da-2d837a2764a7", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h4shv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib47b3a4571a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:29.936 [INFO][4824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:29.936 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib47b3a4571a ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:29.941 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:29.941 [INFO][4824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" 
Namespace="calico-system" Pod="csi-node-driver-h4shv" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4shv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"48b595fd-60f3-4e0e-96da-2d837a2764a7", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b", Pod:"csi-node-driver-h4shv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib47b3a4571a", MAC:"26:1f:80:a7:13:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:45:30.024282 containerd[1531]: 2025-10-28 23:45:30.019 [INFO][4824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" Namespace="calico-system" Pod="csi-node-driver-h4shv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h4shv-eth0" Oct 28 23:45:30.060159 kubelet[2684]: I1028 23:45:30.059399 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k9xnc" podStartSLOduration=42.05937974 podStartE2EDuration="42.05937974s" podCreationTimestamp="2025-10-28 23:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:45:30.057761589 +0000 UTC m=+49.420443383" watchObservedRunningTime="2025-10-28 23:45:30.05937974 +0000 UTC m=+49.422061534" Oct 28 23:45:30.085663 containerd[1531]: time="2025-10-28T23:45:30.085604846Z" level=info msg="connecting to shim 2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b" address="unix:///run/containerd/s/af74ad5dc4b1d2c6119ff6f0055386ae16ffb5bcff38f232ad092e38a9fc6ba5" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:45:30.117231 systemd[1]: Started cri-containerd-2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b.scope - libcontainer container 2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b. 
Oct 28 23:45:30.129984 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:45:30.148136 containerd[1531]: time="2025-10-28T23:45:30.148097167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4shv,Uid:48b595fd-60f3-4e0e-96da-2d837a2764a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ccdcbd51c664ca92e14ce0ce873424e7fcac42205d86ea32f159b2ebd82b68b\"" Oct 28 23:45:30.150358 containerd[1531]: time="2025-10-28T23:45:30.150279956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 23:45:30.155651 systemd-networkd[1432]: calib01a3d9bdfa: Gained IPv6LL Oct 28 23:45:30.368789 containerd[1531]: time="2025-10-28T23:45:30.368644921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:30.370544 containerd[1531]: time="2025-10-28T23:45:30.370446432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 23:45:30.370544 containerd[1531]: time="2025-10-28T23:45:30.370494952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 23:45:30.370808 kubelet[2684]: E1028 23:45:30.370720 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:45:30.370857 kubelet[2684]: E1028 23:45:30.370823 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:45:30.370952 kubelet[2684]: E1028 23:45:30.370909 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:30.380489 containerd[1531]: time="2025-10-28T23:45:30.380333901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 23:45:30.602689 containerd[1531]: time="2025-10-28T23:45:30.602599766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:30.603661 containerd[1531]: time="2025-10-28T23:45:30.603552442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 23:45:30.603661 containerd[1531]: time="2025-10-28T23:45:30.603622961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 23:45:30.603891 kubelet[2684]: E1028 23:45:30.603836 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:45:30.603891 kubelet[2684]: E1028 23:45:30.603885 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:45:30.603991 kubelet[2684]: E1028 23:45:30.603966 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:30.604051 kubelet[2684]: E1028 23:45:30.604011 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4shv" 
podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:30.795681 systemd-networkd[1432]: calid3b25275f71: Gained IPv6LL Oct 28 23:45:30.914817 kubelet[2684]: E1028 23:45:30.914753 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:30.915737 kubelet[2684]: E1028 23:45:30.915180 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:30.919165 kubelet[2684]: E1028 23:45:30.919104 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:31.180713 systemd-networkd[1432]: calib47b3a4571a: Gained IPv6LL Oct 28 23:45:31.222365 kubelet[2684]: I1028 23:45:31.222276 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 28 23:45:31.222872 kubelet[2684]: E1028 23:45:31.222850 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:31.340293 containerd[1531]: time="2025-10-28T23:45:31.340232027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\" id:\"e0097fb07daaabbe10f6e93c77ea9a4bc2796493033f71accb7348e56e1137dc\" pid:4926 exited_at:{seconds:1761695131 nanos:339861989}" Oct 28 23:45:31.436136 containerd[1531]: time="2025-10-28T23:45:31.436015546Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\" id:\"b0b268ca44c34e1cbaeb443baffd4b89cb660fd7f4b4a86fab7022708537db24\" pid:4951 exited_at:{seconds:1761695131 nanos:435694027}" Oct 28 23:45:31.915473 kubelet[2684]: E1028 23:45:31.915187 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:31.916860 kubelet[2684]: E1028 23:45:31.916594 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:31.917462 kubelet[2684]: E1028 23:45:31.917022 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:45:33.148341 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:38696.service - OpenSSH per-connection server daemon (10.0.0.1:38696). Oct 28 23:45:33.205328 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 38696 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:33.207922 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:33.216532 systemd-logind[1510]: New session 10 of user core. Oct 28 23:45:33.224685 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 23:45:33.419349 sshd[4972]: Connection closed by 10.0.0.1 port 38696 Oct 28 23:45:33.419632 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:33.431415 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:38696.service: Deactivated successfully. Oct 28 23:45:33.433474 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 23:45:33.434207 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Oct 28 23:45:33.436975 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:38704.service - OpenSSH per-connection server daemon (10.0.0.1:38704). 
Oct 28 23:45:33.438168 systemd-logind[1510]: Removed session 10. Oct 28 23:45:33.494970 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 38704 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:33.496357 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:33.501363 systemd-logind[1510]: New session 11 of user core. Oct 28 23:45:33.511680 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 23:45:33.745721 sshd[4990]: Connection closed by 10.0.0.1 port 38704 Oct 28 23:45:33.746514 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:33.759297 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:38704.service: Deactivated successfully. Oct 28 23:45:33.761685 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 23:45:33.763129 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Oct 28 23:45:33.766620 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:38718.service - OpenSSH per-connection server daemon (10.0.0.1:38718). Oct 28 23:45:33.769218 systemd-logind[1510]: Removed session 11. Oct 28 23:45:33.823807 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 38718 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:33.825229 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:33.829405 systemd-logind[1510]: New session 12 of user core. Oct 28 23:45:33.840655 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 28 23:45:33.985503 sshd[5004]: Connection closed by 10.0.0.1 port 38718 Oct 28 23:45:33.985418 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:33.989029 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:38718.service: Deactivated successfully. Oct 28 23:45:33.991791 systemd[1]: session-12.scope: Deactivated successfully. 
Oct 28 23:45:33.992612 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Oct 28 23:45:33.994028 systemd-logind[1510]: Removed session 12. Oct 28 23:45:35.723705 containerd[1531]: time="2025-10-28T23:45:35.723660712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:45:35.899574 containerd[1531]: time="2025-10-28T23:45:35.899422397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:35.900518 containerd[1531]: time="2025-10-28T23:45:35.900473952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 23:45:35.900612 containerd[1531]: time="2025-10-28T23:45:35.900555512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 23:45:35.900818 kubelet[2684]: E1028 23:45:35.900775 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:45:35.901723 kubelet[2684]: E1028 23:45:35.901396 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:45:35.901723 kubelet[2684]: E1028 23:45:35.901502 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:35.902658 containerd[1531]: time="2025-10-28T23:45:35.902628502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 23:45:36.090314 containerd[1531]: time="2025-10-28T23:45:36.090237696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:36.091838 containerd[1531]: time="2025-10-28T23:45:36.091794129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 23:45:36.091887 containerd[1531]: time="2025-10-28T23:45:36.091823929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 23:45:36.092047 kubelet[2684]: E1028 23:45:36.092012 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:45:36.092139 kubelet[2684]: E1028 23:45:36.092078 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:45:36.092276 kubelet[2684]: E1028 23:45:36.092157 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:36.092392 kubelet[2684]: E1028 23:45:36.092203 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768f486948-gzj7l" podUID="fae43e44-3d5c-47be-b1b5-59a7cbe16d74" Oct 28 23:45:38.996981 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:38728.service - OpenSSH per-connection server daemon (10.0.0.1:38728). 
Oct 28 23:45:39.053363 sshd[5020]: Accepted publickey for core from 10.0.0.1 port 38728 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:39.054627 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:39.058412 systemd-logind[1510]: New session 13 of user core. Oct 28 23:45:39.074689 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 28 23:45:39.212912 sshd[5023]: Connection closed by 10.0.0.1 port 38728 Oct 28 23:45:39.213326 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:39.216924 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:38728.service: Deactivated successfully. Oct 28 23:45:39.218965 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 23:45:39.219747 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Oct 28 23:45:39.220840 systemd-logind[1510]: Removed session 13. Oct 28 23:45:39.724188 containerd[1531]: time="2025-10-28T23:45:39.724091600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 23:45:39.939265 containerd[1531]: time="2025-10-28T23:45:39.939218384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:39.940244 containerd[1531]: time="2025-10-28T23:45:39.940209699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:39.940328 containerd[1531]: time="2025-10-28T23:45:39.940257259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 23:45:39.940516 kubelet[2684]: E1028 23:45:39.940469 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:45:39.940811 kubelet[2684]: E1028 23:45:39.940522 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:45:39.940811 kubelet[2684]: E1028 23:45:39.940699 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rc79g_calico-system(c7a1e2dd-52c0-45e7-a13b-cfdddc111238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:39.940811 kubelet[2684]: E1028 23:45:39.940731 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:39.941653 containerd[1531]: time="2025-10-28T23:45:39.941618213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:40.159713 containerd[1531]: time="2025-10-28T23:45:40.159609111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 
23:45:40.163760 containerd[1531]: time="2025-10-28T23:45:40.163714052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:40.163829 containerd[1531]: time="2025-10-28T23:45:40.163770812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:40.163993 kubelet[2684]: E1028 23:45:40.163954 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:40.164041 kubelet[2684]: E1028 23:45:40.164005 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:40.164099 kubelet[2684]: E1028 23:45:40.164080 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-gxsbt_calico-apiserver(2d30b29a-2608-4dc4-a762-9ebd83a9d186): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:40.164130 kubelet[2684]: E1028 23:45:40.164116 2684 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:42.726040 containerd[1531]: time="2025-10-28T23:45:42.726001324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 23:45:42.928659 containerd[1531]: time="2025-10-28T23:45:42.928592871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:42.929534 containerd[1531]: time="2025-10-28T23:45:42.929481027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 23:45:42.929567 containerd[1531]: time="2025-10-28T23:45:42.929540706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 23:45:42.929731 kubelet[2684]: E1028 23:45:42.929682 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:45:42.929992 kubelet[2684]: E1028 23:45:42.929731 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:45:42.929992 kubelet[2684]: E1028 23:45:42.929895 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d89cc7458-8hgnf_calico-system(37c97580-71cc-4bc9-9010-0bb18fd1ed99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:42.929992 kubelet[2684]: E1028 23:45:42.929937 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:42.930161 containerd[1531]: time="2025-10-28T23:45:42.930132984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:43.123463 containerd[1531]: time="2025-10-28T23:45:43.123402016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:43.124277 containerd[1531]: time="2025-10-28T23:45:43.124236212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:43.124385 containerd[1531]: time="2025-10-28T23:45:43.124311092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:43.124528 kubelet[2684]: E1028 23:45:43.124472 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:43.124576 kubelet[2684]: E1028 23:45:43.124522 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:43.124760 kubelet[2684]: E1028 23:45:43.124727 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-hbmkd_calico-apiserver(51dc260a-540b-4f02-a0f1-2e415e73ff2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:43.124915 containerd[1531]: time="2025-10-28T23:45:43.124817210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:45:43.125248 kubelet[2684]: E1028 23:45:43.125067 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:43.339080 containerd[1531]: time="2025-10-28T23:45:43.338988473Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:43.339870 containerd[1531]: time="2025-10-28T23:45:43.339796310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:45:43.339870 containerd[1531]: time="2025-10-28T23:45:43.339837710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:45:43.340040 kubelet[2684]: E1028 23:45:43.340002 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:43.340082 kubelet[2684]: E1028 23:45:43.340048 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:45:43.340155 
kubelet[2684]: E1028 23:45:43.340133 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c596cb8fc-mxxk5_calico-apiserver(77168d35-e1f8-4112-9d0d-c414c5ff0981): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:43.340186 kubelet[2684]: E1028 23:45:43.340169 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:43.723570 containerd[1531]: time="2025-10-28T23:45:43.723528632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 23:45:43.929509 containerd[1531]: time="2025-10-28T23:45:43.929417811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:43.930410 containerd[1531]: time="2025-10-28T23:45:43.930375927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 23:45:43.930486 containerd[1531]: time="2025-10-28T23:45:43.930449767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 23:45:43.930675 kubelet[2684]: E1028 23:45:43.930599 2684 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:45:43.930675 kubelet[2684]: E1028 23:45:43.930660 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:45:43.932043 kubelet[2684]: E1028 23:45:43.931078 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:43.933385 containerd[1531]: time="2025-10-28T23:45:43.933360874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 23:45:44.165569 containerd[1531]: time="2025-10-28T23:45:44.165497945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:45:44.166453 containerd[1531]: time="2025-10-28T23:45:44.166398221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 23:45:44.166520 containerd[1531]: 
time="2025-10-28T23:45:44.166408781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 23:45:44.166711 kubelet[2684]: E1028 23:45:44.166656 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:45:44.166711 kubelet[2684]: E1028 23:45:44.166709 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:45:44.166789 kubelet[2684]: E1028 23:45:44.166771 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 23:45:44.166841 kubelet[2684]: E1028 23:45:44.166810 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:45:44.228684 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:44210.service - OpenSSH per-connection server daemon (10.0.0.1:44210). Oct 28 23:45:44.288799 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 44210 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:44.289934 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:44.294324 systemd-logind[1510]: New session 14 of user core. Oct 28 23:45:44.305618 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 23:45:44.446433 sshd[5049]: Connection closed by 10.0.0.1 port 44210 Oct 28 23:45:44.446908 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:44.450292 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Oct 28 23:45:44.450691 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:44210.service: Deactivated successfully. Oct 28 23:45:44.452414 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 23:45:44.454790 systemd-logind[1510]: Removed session 14. 
Oct 28 23:45:46.726125 kubelet[2684]: E1028 23:45:46.725904 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768f486948-gzj7l" podUID="fae43e44-3d5c-47be-b1b5-59a7cbe16d74" Oct 28 23:45:49.459728 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:43272.service - OpenSSH per-connection server daemon (10.0.0.1:43272). Oct 28 23:45:49.533602 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 43272 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:49.535895 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:49.542357 systemd-logind[1510]: New session 15 of user core. Oct 28 23:45:49.553683 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 23:45:49.739895 sshd[5067]: Connection closed by 10.0.0.1 port 43272 Oct 28 23:45:49.740387 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:49.743915 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:43272.service: Deactivated successfully. Oct 28 23:45:49.745675 systemd[1]: session-15.scope: Deactivated successfully. 
Oct 28 23:45:49.746790 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. Oct 28 23:45:49.749635 systemd-logind[1510]: Removed session 15. Oct 28 23:45:50.725395 kubelet[2684]: E1028 23:45:50.724726 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238" Oct 28 23:45:51.725452 kubelet[2684]: E1028 23:45:51.724965 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186" Oct 28 23:45:53.723026 kubelet[2684]: E1028 23:45:53.722977 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c" Oct 28 23:45:54.755847 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:43288.service - OpenSSH per-connection server daemon (10.0.0.1:43288). Oct 28 23:45:54.800849 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 43288 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:54.802203 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:54.807901 systemd-logind[1510]: New session 16 of user core. Oct 28 23:45:54.826645 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 23:45:54.962096 sshd[5088]: Connection closed by 10.0.0.1 port 43288 Oct 28 23:45:54.962532 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:54.971706 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:43288.service: Deactivated successfully. Oct 28 23:45:54.973361 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 23:45:54.974073 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Oct 28 23:45:54.976461 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:43300.service - OpenSSH per-connection server daemon (10.0.0.1:43300). Oct 28 23:45:54.978210 systemd-logind[1510]: Removed session 16. Oct 28 23:45:55.030798 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 43300 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:55.032081 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:55.036508 systemd-logind[1510]: New session 17 of user core. Oct 28 23:45:55.046649 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 28 23:45:55.247990 sshd[5105]: Connection closed by 10.0.0.1 port 43300 Oct 28 23:45:55.248485 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:55.255921 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:43300.service: Deactivated successfully. Oct 28 23:45:55.257877 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 23:45:55.258587 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Oct 28 23:45:55.261121 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:43306.service - OpenSSH per-connection server daemon (10.0.0.1:43306). Oct 28 23:45:55.261838 systemd-logind[1510]: Removed session 17. Oct 28 23:45:55.321020 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 43306 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:55.322136 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:55.326282 systemd-logind[1510]: New session 18 of user core. Oct 28 23:45:55.341631 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 23:45:56.004432 sshd[5120]: Connection closed by 10.0.0.1 port 43306 Oct 28 23:45:56.005139 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:56.014752 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:43306.service: Deactivated successfully. Oct 28 23:45:56.019098 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 23:45:56.020326 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Oct 28 23:45:56.027578 systemd-logind[1510]: Removed session 18. Oct 28 23:45:56.028073 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:43318.service - OpenSSH per-connection server daemon (10.0.0.1:43318). 
Oct 28 23:45:56.092343 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 43318 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:56.093958 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:56.098743 systemd-logind[1510]: New session 19 of user core. Oct 28 23:45:56.113649 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 23:45:56.424973 sshd[5148]: Connection closed by 10.0.0.1 port 43318 Oct 28 23:45:56.425898 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:56.437669 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:43318.service: Deactivated successfully. Oct 28 23:45:56.441978 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 23:45:56.447630 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Oct 28 23:45:56.452625 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:43328.service - OpenSSH per-connection server daemon (10.0.0.1:43328). Oct 28 23:45:56.454173 systemd-logind[1510]: Removed session 19. Oct 28 23:45:56.507061 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 43328 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:45:56.508329 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:45:56.512501 systemd-logind[1510]: New session 20 of user core. Oct 28 23:45:56.518622 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 23:45:56.652472 sshd[5163]: Connection closed by 10.0.0.1 port 43328 Oct 28 23:45:56.652948 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Oct 28 23:45:56.656874 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:43328.service: Deactivated successfully. Oct 28 23:45:56.658744 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 23:45:56.659608 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. 
Oct 28 23:45:56.661059 systemd-logind[1510]: Removed session 20. Oct 28 23:45:56.727345 kubelet[2684]: E1028 23:45:56.727102 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981" Oct 28 23:45:57.723469 kubelet[2684]: E1028 23:45:57.723405 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99" Oct 28 23:45:58.727285 kubelet[2684]: E1028 23:45:58.727226 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7" Oct 28 23:46:01.425827 containerd[1531]: time="2025-10-28T23:46:01.425751214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5120a56b5efdff5428b0487d3de62fbd5b2b15533b838e6315b6b9162fc3ed34\" id:\"b3f6ee295d67dfcfe464a989393c08d8e50bac03b4f43ac7a5f61c93cf121c8a\" pid:5201 exited_at:{seconds:1761695161 nanos:425382052}" Oct 28 23:46:01.675047 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:53280.service - OpenSSH per-connection server daemon (10.0.0.1:53280). Oct 28 23:46:01.722920 kubelet[2684]: E1028 23:46:01.722879 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:46:01.724168 containerd[1531]: time="2025-10-28T23:46:01.724107307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:46:01.737581 sshd[5215]: Accepted publickey for core from 10.0.0.1 port 53280 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns Oct 28 23:46:01.738823 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:46:01.744690 systemd-logind[1510]: New session 21 of user core. Oct 28 23:46:01.761655 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 28 23:46:01.892253 sshd[5218]: Connection closed by 10.0.0.1 port 53280
Oct 28 23:46:01.892862 sshd-session[5215]: pam_unix(sshd:session): session closed for user core
Oct 28 23:46:01.897510 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:53280.service: Deactivated successfully.
Oct 28 23:46:01.899595 systemd[1]: session-21.scope: Deactivated successfully.
Oct 28 23:46:01.902718 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Oct 28 23:46:01.904569 systemd-logind[1510]: Removed session 21.
Oct 28 23:46:01.929778 containerd[1531]: time="2025-10-28T23:46:01.929613517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:01.931372 containerd[1531]: time="2025-10-28T23:46:01.931261044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 28 23:46:01.931372 containerd[1531]: time="2025-10-28T23:46:01.931327765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 28 23:46:01.931537 kubelet[2684]: E1028 23:46:01.931509 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 23:46:01.931577 kubelet[2684]: E1028 23:46:01.931552 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 28 23:46:01.932021 kubelet[2684]: E1028 23:46:01.931622 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:01.933790 containerd[1531]: time="2025-10-28T23:46:01.933496534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 28 23:46:02.114582 containerd[1531]: time="2025-10-28T23:46:02.114304774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:02.115586 containerd[1531]: time="2025-10-28T23:46:02.115547059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 28 23:46:02.115853 containerd[1531]: time="2025-10-28T23:46:02.115728900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 28 23:46:02.116101 kubelet[2684]: E1028 23:46:02.116032 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 23:46:02.116167 kubelet[2684]: E1028 23:46:02.116108 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 28 23:46:02.116218 kubelet[2684]: E1028 23:46:02.116203 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768f486948-gzj7l_calico-system(fae43e44-3d5c-47be-b1b5-59a7cbe16d74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:02.116290 kubelet[2684]: E1028 23:46:02.116248 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768f486948-gzj7l" podUID="fae43e44-3d5c-47be-b1b5-59a7cbe16d74"
Oct 28 23:46:05.725110 containerd[1531]: time="2025-10-28T23:46:05.725060545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 23:46:05.935068 containerd[1531]: time="2025-10-28T23:46:05.935006566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:06.009484 containerd[1531]: time="2025-10-28T23:46:06.009306147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 23:46:06.009666 containerd[1531]: time="2025-10-28T23:46:06.009367107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:46:06.009898 kubelet[2684]: E1028 23:46:06.009859 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:06.010830 kubelet[2684]: E1028 23:46:06.009911 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:06.010830 kubelet[2684]: E1028 23:46:06.010135 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-hbmkd_calico-apiserver(51dc260a-540b-4f02-a0f1-2e415e73ff2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:06.010830 kubelet[2684]: E1028 23:46:06.010185 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-hbmkd" podUID="51dc260a-540b-4f02-a0f1-2e415e73ff2c"
Oct 28 23:46:06.011012 containerd[1531]: time="2025-10-28T23:46:06.010218510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 28 23:46:06.258551 containerd[1531]: time="2025-10-28T23:46:06.258497220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:06.287709 containerd[1531]: time="2025-10-28T23:46:06.287532877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 28 23:46:06.287709 containerd[1531]: time="2025-10-28T23:46:06.287626318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:46:06.288218 kubelet[2684]: E1028 23:46:06.287970 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 23:46:06.288218 kubelet[2684]: E1028 23:46:06.288172 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 28 23:46:06.288773 kubelet[2684]: E1028 23:46:06.288484 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-rc79g_calico-system(c7a1e2dd-52c0-45e7-a13b-cfdddc111238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:06.288773 kubelet[2684]: E1028 23:46:06.288535 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-rc79g" podUID="c7a1e2dd-52c0-45e7-a13b-cfdddc111238"
Oct 28 23:46:06.289296 containerd[1531]: time="2025-10-28T23:46:06.288705841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 23:46:06.552100 containerd[1531]: time="2025-10-28T23:46:06.551978922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:06.556972 containerd[1531]: time="2025-10-28T23:46:06.556891738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 23:46:06.556972 containerd[1531]: time="2025-10-28T23:46:06.556954698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:46:06.557518 kubelet[2684]: E1028 23:46:06.557281 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:06.557518 kubelet[2684]: E1028 23:46:06.557327 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:06.557518 kubelet[2684]: E1028 23:46:06.557404 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-548d874589-gxsbt_calico-apiserver(2d30b29a-2608-4dc4-a762-9ebd83a9d186): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:06.557662 kubelet[2684]: E1028 23:46:06.557638 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-548d874589-gxsbt" podUID="2d30b29a-2608-4dc4-a762-9ebd83a9d186"
Oct 28 23:46:06.905617 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:53288.service - OpenSSH per-connection server daemon (10.0.0.1:53288).
Oct 28 23:46:06.965376 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 53288 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns
Oct 28 23:46:06.967161 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:46:06.975146 systemd-logind[1510]: New session 22 of user core.
Oct 28 23:46:06.980649 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 28 23:46:07.118271 sshd[5234]: Connection closed by 10.0.0.1 port 53288
Oct 28 23:46:07.120203 sshd-session[5231]: pam_unix(sshd:session): session closed for user core
Oct 28 23:46:07.123927 systemd[1]: session-22.scope: Deactivated successfully.
Oct 28 23:46:07.124687 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit.
Oct 28 23:46:07.125058 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:53288.service: Deactivated successfully.
Oct 28 23:46:07.127974 systemd-logind[1510]: Removed session 22.
Oct 28 23:46:08.725503 containerd[1531]: time="2025-10-28T23:46:08.725462553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 28 23:46:08.909889 containerd[1531]: time="2025-10-28T23:46:08.909831024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:08.910896 containerd[1531]: time="2025-10-28T23:46:08.910844707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 28 23:46:08.910937 containerd[1531]: time="2025-10-28T23:46:08.910887587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 28 23:46:08.911088 kubelet[2684]: E1028 23:46:08.911052 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:08.911397 kubelet[2684]: E1028 23:46:08.911098 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 28 23:46:08.911397 kubelet[2684]: E1028 23:46:08.911180 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6c596cb8fc-mxxk5_calico-apiserver(77168d35-e1f8-4112-9d0d-c414c5ff0981): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:08.911397 kubelet[2684]: E1028 23:46:08.911222 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c596cb8fc-mxxk5" podUID="77168d35-e1f8-4112-9d0d-c414c5ff0981"
Oct 28 23:46:11.723741 containerd[1531]: time="2025-10-28T23:46:11.723697020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 28 23:46:11.901102 containerd[1531]: time="2025-10-28T23:46:11.901046183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:11.902056 containerd[1531]: time="2025-10-28T23:46:11.902019065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 28 23:46:11.902170 containerd[1531]: time="2025-10-28T23:46:11.902087906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 28 23:46:11.902387 kubelet[2684]: E1028 23:46:11.902340 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 23:46:11.902679 kubelet[2684]: E1028 23:46:11.902396 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 28 23:46:11.903070 kubelet[2684]: E1028 23:46:11.902535 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:11.903610 containerd[1531]: time="2025-10-28T23:46:11.903380349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 28 23:46:12.119759 containerd[1531]: time="2025-10-28T23:46:12.119659311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:12.120632 containerd[1531]: time="2025-10-28T23:46:12.120587833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 28 23:46:12.120694 containerd[1531]: time="2025-10-28T23:46:12.120681433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 28 23:46:12.120959 kubelet[2684]: E1028 23:46:12.120905 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 23:46:12.120959 kubelet[2684]: E1028 23:46:12.120957 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 28 23:46:12.121199 kubelet[2684]: E1028 23:46:12.121158 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d89cc7458-8hgnf_calico-system(37c97580-71cc-4bc9-9010-0bb18fd1ed99): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:12.121281 kubelet[2684]: E1028 23:46:12.121201 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d89cc7458-8hgnf" podUID="37c97580-71cc-4bc9-9010-0bb18fd1ed99"
Oct 28 23:46:12.121367 containerd[1531]: time="2025-10-28T23:46:12.121254675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 28 23:46:12.133636 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002).
Oct 28 23:46:12.192203 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:G4gCTb8AeJlPbCJKutsl1VHntZQjxyVevMdNsK7D5Ns
Oct 28 23:46:12.193973 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 28 23:46:12.198130 systemd-logind[1510]: New session 23 of user core.
Oct 28 23:46:12.204610 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 28 23:46:12.339341 containerd[1531]: time="2025-10-28T23:46:12.339281946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 28 23:46:12.343943 containerd[1531]: time="2025-10-28T23:46:12.343891477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 28 23:46:12.344152 containerd[1531]: time="2025-10-28T23:46:12.343959757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 28 23:46:12.344304 kubelet[2684]: E1028 23:46:12.344257 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 23:46:12.344395 kubelet[2684]: E1028 23:46:12.344380 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 28 23:46:12.344552 kubelet[2684]: E1028 23:46:12.344533 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-h4shv_calico-system(48b595fd-60f3-4e0e-96da-2d837a2764a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 28 23:46:12.344712 kubelet[2684]: E1028 23:46:12.344681 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4shv" podUID="48b595fd-60f3-4e0e-96da-2d837a2764a7"
Oct 28 23:46:12.358605 sshd[5252]: Connection closed by 10.0.0.1 port 58002
Oct 28 23:46:12.358900 sshd-session[5249]: pam_unix(sshd:session): session closed for user core
Oct 28 23:46:12.364315 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:58002.service: Deactivated successfully.
Oct 28 23:46:12.366858 systemd[1]: session-23.scope: Deactivated successfully.
Oct 28 23:46:12.368603 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit.
Oct 28 23:46:12.370240 systemd-logind[1510]: Removed session 23.
Oct 28 23:46:13.722971 kubelet[2684]: E1028 23:46:13.722582 2684 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"