Oct 30 13:01:52.312390 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 30 13:01:52.312414 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Thu Oct 30 11:31:24 -00 2025
Oct 30 13:01:52.312422 kernel: KASLR enabled
Oct 30 13:01:52.312428 kernel: efi: EFI v2.7 by EDK II
Oct 30 13:01:52.312434 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 30 13:01:52.312440 kernel: random: crng init done
Oct 30 13:01:52.312447 kernel: secureboot: Secure boot disabled
Oct 30 13:01:52.312453 kernel: ACPI: Early table checksum verification disabled
Oct 30 13:01:52.312460 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 30 13:01:52.312466 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 30 13:01:52.312472 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312478 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312484 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312498 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312509 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312516 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312522 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312528 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312535 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 13:01:52.312541 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 30 13:01:52.312547 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 30 13:01:52.312554 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 13:01:52.312561 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 30 13:01:52.312568 kernel: Zone ranges:
Oct 30 13:01:52.312574 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 13:01:52.312580 kernel: DMA32 empty
Oct 30 13:01:52.312587 kernel: Normal empty
Oct 30 13:01:52.312593 kernel: Device empty
Oct 30 13:01:52.312599 kernel: Movable zone start for each node
Oct 30 13:01:52.312605 kernel: Early memory node ranges
Oct 30 13:01:52.312612 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 30 13:01:52.312618 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 30 13:01:52.312624 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 30 13:01:52.312631 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 30 13:01:52.312638 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 30 13:01:52.312645 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 30 13:01:52.312651 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 30 13:01:52.312657 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 30 13:01:52.312663 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 30 13:01:52.312670 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 30 13:01:52.312680 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 30 13:01:52.312686 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 30 13:01:52.312693 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 30 13:01:52.312700 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 30 13:01:52.312706 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 30 13:01:52.312713 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 30 13:01:52.312720 kernel: psci: probing for conduit method from ACPI.
Oct 30 13:01:52.312727 kernel: psci: PSCIv1.1 detected in firmware.
Oct 30 13:01:52.312734 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 30 13:01:52.312741 kernel: psci: Trusted OS migration not required
Oct 30 13:01:52.312748 kernel: psci: SMC Calling Convention v1.1
Oct 30 13:01:52.312755 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 30 13:01:52.312762 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 30 13:01:52.312769 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 30 13:01:52.312776 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 30 13:01:52.312782 kernel: Detected PIPT I-cache on CPU0
Oct 30 13:01:52.312789 kernel: CPU features: detected: GIC system register CPU interface
Oct 30 13:01:52.312796 kernel: CPU features: detected: Spectre-v4
Oct 30 13:01:52.312802 kernel: CPU features: detected: Spectre-BHB
Oct 30 13:01:52.312810 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 30 13:01:52.312817 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 30 13:01:52.312824 kernel: CPU features: detected: ARM erratum 1418040
Oct 30 13:01:52.312831 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 30 13:01:52.312837 kernel: alternatives: applying boot alternatives
Oct 30 13:01:52.312845 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=40a4ce45c98d9ce433e45e6a01680c5f2d7ea331751961c11527ad47f57b4bef
Oct 30 13:01:52.312852 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 13:01:52.312859 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 13:01:52.312866 kernel: Fallback order for Node 0: 0
Oct 30 13:01:52.312873 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 30 13:01:52.313226 kernel: Policy zone: DMA
Oct 30 13:01:52.313237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 13:01:52.313244 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 30 13:01:52.313251 kernel: software IO TLB: area num 4.
Oct 30 13:01:52.313258 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 30 13:01:52.313265 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 30 13:01:52.313272 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 13:01:52.313278 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 13:01:52.313286 kernel: rcu: RCU event tracing is enabled.
Oct 30 13:01:52.313293 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 13:01:52.313300 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 13:01:52.313309 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 13:01:52.313316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 13:01:52.313323 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 13:01:52.313330 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:01:52.313337 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 13:01:52.313344 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 30 13:01:52.313351 kernel: GICv3: 256 SPIs implemented
Oct 30 13:01:52.313357 kernel: GICv3: 0 Extended SPIs implemented
Oct 30 13:01:52.313364 kernel: Root IRQ handler: gic_handle_irq
Oct 30 13:01:52.313371 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 30 13:01:52.313378 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 30 13:01:52.313386 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 30 13:01:52.313393 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 30 13:01:52.313400 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 30 13:01:52.313407 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 30 13:01:52.313413 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 30 13:01:52.313420 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 30 13:01:52.313427 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 13:01:52.313434 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 13:01:52.313441 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 30 13:01:52.313448 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 30 13:01:52.313455 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 30 13:01:52.313463 kernel: arm-pv: using stolen time PV
Oct 30 13:01:52.313470 kernel: Console: colour dummy device 80x25
Oct 30 13:01:52.313478 kernel: ACPI: Core revision 20240827
Oct 30 13:01:52.313485 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 30 13:01:52.313502 kernel: pid_max: default: 32768 minimum: 301
Oct 30 13:01:52.313510 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 13:01:52.313517 kernel: landlock: Up and running.
Oct 30 13:01:52.313524 kernel: SELinux: Initializing.
Oct 30 13:01:52.313533 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:01:52.313541 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 13:01:52.313548 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 13:01:52.313555 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 13:01:52.313563 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 13:01:52.313570 kernel: Remapping and enabling EFI services.
Oct 30 13:01:52.313577 kernel: smp: Bringing up secondary CPUs ...
Oct 30 13:01:52.313585 kernel: Detected PIPT I-cache on CPU1
Oct 30 13:01:52.313597 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 30 13:01:52.313606 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 30 13:01:52.313614 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 13:01:52.313622 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 30 13:01:52.313629 kernel: Detected PIPT I-cache on CPU2
Oct 30 13:01:52.313636 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 30 13:01:52.313645 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 30 13:01:52.313653 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 13:01:52.313660 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 30 13:01:52.313668 kernel: Detected PIPT I-cache on CPU3
Oct 30 13:01:52.313676 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 30 13:01:52.313683 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 30 13:01:52.313691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 30 13:01:52.313700 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 30 13:01:52.313707 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 13:01:52.313715 kernel: SMP: Total of 4 processors activated.
Oct 30 13:01:52.313723 kernel: CPU: All CPU(s) started at EL1
Oct 30 13:01:52.313730 kernel: CPU features: detected: 32-bit EL0 Support
Oct 30 13:01:52.313745 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 30 13:01:52.313753 kernel: CPU features: detected: Common not Private translations
Oct 30 13:01:52.313771 kernel: CPU features: detected: CRC32 instructions
Oct 30 13:01:52.313778 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 30 13:01:52.313786 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 30 13:01:52.313793 kernel: CPU features: detected: LSE atomic instructions
Oct 30 13:01:52.313801 kernel: CPU features: detected: Privileged Access Never
Oct 30 13:01:52.313808 kernel: CPU features: detected: RAS Extension Support
Oct 30 13:01:52.313816 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 30 13:01:52.313825 kernel: alternatives: applying system-wide alternatives
Oct 30 13:01:52.313833 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 30 13:01:52.313841 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Oct 30 13:01:52.313849 kernel: devtmpfs: initialized
Oct 30 13:01:52.313856 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 13:01:52.313864 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 13:01:52.313871 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 30 13:01:52.313891 kernel: 0 pages in range for non-PLT usage
Oct 30 13:01:52.313900 kernel: 515056 pages in range for PLT usage
Oct 30 13:01:52.313907 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 13:01:52.313915 kernel: SMBIOS 3.0.0 present.
Oct 30 13:01:52.313922 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 30 13:01:52.313929 kernel: DMI: Memory slots populated: 1/1
Oct 30 13:01:52.313937 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 13:01:52.313944 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 30 13:01:52.313954 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 30 13:01:52.313961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 30 13:01:52.313969 kernel: audit: initializing netlink subsys (disabled)
Oct 30 13:01:52.313976 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Oct 30 13:01:52.313984 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 13:01:52.313991 kernel: cpuidle: using governor menu
Oct 30 13:01:52.313999 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 30 13:01:52.314007 kernel: ASID allocator initialised with 32768 entries
Oct 30 13:01:52.314015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 13:01:52.314022 kernel: Serial: AMBA PL011 UART driver
Oct 30 13:01:52.314030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 13:01:52.314037 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 13:01:52.314045 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 30 13:01:52.314052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 30 13:01:52.314061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 13:01:52.314069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 13:01:52.314076 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 30 13:01:52.314084 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 30 13:01:52.314091 kernel: ACPI: Added _OSI(Module Device)
Oct 30 13:01:52.314098 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 13:01:52.314106 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 13:01:52.314113 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 13:01:52.314122 kernel: ACPI: Interpreter enabled
Oct 30 13:01:52.314129 kernel: ACPI: Using GIC for interrupt routing
Oct 30 13:01:52.314136 kernel: ACPI: MCFG table detected, 1 entries
Oct 30 13:01:52.314144 kernel: ACPI: CPU0 has been hot-added
Oct 30 13:01:52.314151 kernel: ACPI: CPU1 has been hot-added
Oct 30 13:01:52.314158 kernel: ACPI: CPU2 has been hot-added
Oct 30 13:01:52.314166 kernel: ACPI: CPU3 has been hot-added
Oct 30 13:01:52.314174 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 30 13:01:52.314182 kernel: printk: legacy console [ttyAMA0] enabled
Oct 30 13:01:52.314189 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 13:01:52.314357 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 13:01:52.314447 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 30 13:01:52.314753 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 30 13:01:52.314863 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 30 13:01:52.314970 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 30 13:01:52.314982 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 30 13:01:52.314990 kernel: PCI host bridge to bus 0000:00
Oct 30 13:01:52.315074 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 30 13:01:52.315147 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 30 13:01:52.315224 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 30 13:01:52.315295 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 13:01:52.315390 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 30 13:01:52.315479 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 13:01:52.315580 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 30 13:01:52.315663 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 30 13:01:52.315742 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 30 13:01:52.315821 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 30 13:01:52.315913 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 30 13:01:52.315995 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 30 13:01:52.316070 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 30 13:01:52.316146 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 30 13:01:52.316218 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 30 13:01:52.316228 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 30 13:01:52.316236 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 30 13:01:52.316243 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 30 13:01:52.316251 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 30 13:01:52.316258 kernel: iommu: Default domain type: Translated
Oct 30 13:01:52.316268 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 30 13:01:52.316275 kernel: efivars: Registered efivars operations
Oct 30 13:01:52.316283 kernel: vgaarb: loaded
Oct 30 13:01:52.316290 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 30 13:01:52.316297 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 13:01:52.316305 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 13:01:52.316312 kernel: pnp: PnP ACPI init
Oct 30 13:01:52.316403 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 30 13:01:52.316414 kernel: pnp: PnP ACPI: found 1 devices
Oct 30 13:01:52.316422 kernel: NET: Registered PF_INET protocol family
Oct 30 13:01:52.316429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 13:01:52.316437 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 13:01:52.316445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 13:01:52.316452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 13:01:52.316462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 13:01:52.316469 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 13:01:52.316477 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:01:52.316484 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 13:01:52.316500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 13:01:52.316508 kernel: PCI: CLS 0 bytes, default 64
Oct 30 13:01:52.316515 kernel: kvm [1]: HYP mode not available
Oct 30 13:01:52.316524 kernel: Initialise system trusted keyrings
Oct 30 13:01:52.316532 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 13:01:52.316539 kernel: Key type asymmetric registered
Oct 30 13:01:52.316547 kernel: Asymmetric key parser 'x509' registered
Oct 30 13:01:52.316554 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 30 13:01:52.316562 kernel: io scheduler mq-deadline registered
Oct 30 13:01:52.316569 kernel: io scheduler kyber registered
Oct 30 13:01:52.316578 kernel: io scheduler bfq registered
Oct 30 13:01:52.316585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 30 13:01:52.316593 kernel: ACPI: button: Power Button [PWRB]
Oct 30 13:01:52.316601 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 30 13:01:52.316684 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 30 13:01:52.316694 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 13:01:52.316702 kernel: thunder_xcv, ver 1.0
Oct 30 13:01:52.316711 kernel: thunder_bgx, ver 1.0
Oct 30 13:01:52.316718 kernel: nicpf, ver 1.0
Oct 30 13:01:52.316726 kernel: nicvf, ver 1.0
Oct 30 13:01:52.316815 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 30 13:01:52.316906 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-30T13:01:51 UTC (1761829311)
Oct 30 13:01:52.316918 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 30 13:01:52.316928 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 30 13:01:52.316935 kernel: watchdog: NMI not fully supported
Oct 30 13:01:52.316943 kernel: watchdog: Hard watchdog permanently disabled
Oct 30 13:01:52.316951 kernel: NET: Registered PF_INET6 protocol family
Oct 30 13:01:52.316959 kernel: Segment Routing with IPv6
Oct 30 13:01:52.316966 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 13:01:52.316974 kernel: NET: Registered PF_PACKET protocol family
Oct 30 13:01:52.316982 kernel: Key type dns_resolver registered
Oct 30 13:01:52.316991 kernel: registered taskstats version 1
Oct 30 13:01:52.316999 kernel: Loading compiled-in X.509 certificates
Oct 30 13:01:52.317007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 35f23639015c23440438f2e7766a7a071061b3cd'
Oct 30 13:01:52.317015 kernel: Demotion targets for Node 0: null
Oct 30 13:01:52.317022 kernel: Key type .fscrypt registered
Oct 30 13:01:52.317030 kernel: Key type fscrypt-provisioning registered
Oct 30 13:01:52.317038 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 30 13:01:52.317047 kernel: ima: Allocated hash algorithm: sha1
Oct 30 13:01:52.317054 kernel: ima: No architecture policies found
Oct 30 13:01:52.317062 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 30 13:01:52.317070 kernel: clk: Disabling unused clocks
Oct 30 13:01:52.317077 kernel: PM: genpd: Disabling unused power domains
Oct 30 13:01:52.317085 kernel: Freeing unused kernel memory: 12992K
Oct 30 13:01:52.317093 kernel: Run /init as init process
Oct 30 13:01:52.317102 kernel: with arguments:
Oct 30 13:01:52.317109 kernel: /init
Oct 30 13:01:52.317117 kernel: with environment:
Oct 30 13:01:52.317124 kernel: HOME=/
Oct 30 13:01:52.317132 kernel: TERM=linux
Oct 30 13:01:52.317239 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 30 13:01:52.317338 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 30 13:01:52.317352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 13:01:52.317359 kernel: GPT:16515071 != 27000831
Oct 30 13:01:52.317367 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 13:01:52.317374 kernel: GPT:16515071 != 27000831
Oct 30 13:01:52.317381 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 13:01:52.317389 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 13:01:52.317397 kernel: SCSI subsystem initialized
Oct 30 13:01:52.317405 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 13:01:52.317413 kernel: device-mapper: uevent: version 1.0.3
Oct 30 13:01:52.317421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 13:01:52.317428 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 30 13:01:52.317436 kernel: raid6: neonx8 gen() 15738 MB/s
Oct 30 13:01:52.317443 kernel: raid6: neonx4 gen() 15817 MB/s
Oct 30 13:01:52.317452 kernel: raid6: neonx2 gen() 13299 MB/s
Oct 30 13:01:52.317459 kernel: raid6: neonx1 gen() 10560 MB/s
Oct 30 13:01:52.317467 kernel: raid6: int64x8 gen() 6914 MB/s
Oct 30 13:01:52.317475 kernel: raid6: int64x4 gen() 7362 MB/s
Oct 30 13:01:52.317482 kernel: raid6: int64x2 gen() 6112 MB/s
Oct 30 13:01:52.317496 kernel: raid6: int64x1 gen() 5061 MB/s
Oct 30 13:01:52.317506 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
Oct 30 13:01:52.317516 kernel: raid6: .... xor() 12352 MB/s, rmw enabled
Oct 30 13:01:52.317523 kernel: raid6: using neon recovery algorithm
Oct 30 13:01:52.317531 kernel: xor: measuring software checksum speed
Oct 30 13:01:52.317538 kernel: 8regs : 19884 MB/sec
Oct 30 13:01:52.317545 kernel: 32regs : 21630 MB/sec
Oct 30 13:01:52.317553 kernel: arm64_neon : 28003 MB/sec
Oct 30 13:01:52.317560 kernel: xor: using function: arm64_neon (28003 MB/sec)
Oct 30 13:01:52.317568 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 13:01:52.317577 kernel: BTRFS: device fsid 9ed8f3ce-795b-4374-89cf-81834540bd6d devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (206)
Oct 30 13:01:52.317585 kernel: BTRFS info (device dm-0): first mount of filesystem 9ed8f3ce-795b-4374-89cf-81834540bd6d
Oct 30 13:01:52.317592 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 30 13:01:52.317600 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 13:01:52.317607 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 13:01:52.317615 kernel: loop: module loaded
Oct 30 13:01:52.317623 kernel: loop0: detected capacity change from 0 to 91480
Oct 30 13:01:52.317631 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 13:01:52.317640 systemd[1]: Successfully made /usr/ read-only.
Oct 30 13:01:52.317650 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 13:01:52.317659 systemd[1]: Detected virtualization kvm.
Oct 30 13:01:52.317666 systemd[1]: Detected architecture arm64.
Oct 30 13:01:52.317675 systemd[1]: Running in initrd.
Oct 30 13:01:52.317683 systemd[1]: No hostname configured, using default hostname.
Oct 30 13:01:52.317692 systemd[1]: Hostname set to .
Oct 30 13:01:52.317700 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 13:01:52.317708 systemd[1]: Queued start job for default target initrd.target.
Oct 30 13:01:52.317716 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 13:01:52.317724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 13:01:52.317734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 13:01:52.317742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 13:01:52.317751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 13:01:52.317759 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 13:01:52.317768 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 13:01:52.317778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 13:01:52.317786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 13:01:52.317794 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 13:01:52.317802 systemd[1]: Reached target paths.target - Path Units.
Oct 30 13:01:52.317810 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 13:01:52.317818 systemd[1]: Reached target swap.target - Swaps.
Oct 30 13:01:52.317826 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 13:01:52.317836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 13:01:52.317844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 13:01:52.317852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 13:01:52.317860 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 13:01:52.317884 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 13:01:52.317897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 13:01:52.317906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 13:01:52.317914 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 13:01:52.317923 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 13:01:52.317932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 13:01:52.317941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 13:01:52.317949 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 13:01:52.317959 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 13:01:52.317968 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 13:01:52.317976 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 13:01:52.317985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 13:01:52.317993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:01:52.318007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 13:01:52.318018 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 13:01:52.318027 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 13:01:52.318035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 13:01:52.318068 systemd-journald[345]: Collecting audit messages is disabled.
Oct 30 13:01:52.318090 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 13:01:52.318098 kernel: Bridge firewalling registered
Oct 30 13:01:52.318107 systemd-journald[345]: Journal started
Oct 30 13:01:52.318126 systemd-journald[345]: Runtime Journal (/run/log/journal/495d50383ecd48329d6c16c60ecbf8d0) is 6M, max 48.5M, 42.4M free.
Oct 30 13:01:52.317962 systemd-modules-load[346]: Inserted module 'br_netfilter'
Oct 30 13:01:52.320146 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 13:01:52.322923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 13:01:52.325073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 13:01:52.327789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:01:52.332030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 13:01:52.333893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 13:01:52.336931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 13:01:52.347889 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 13:01:52.356051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 13:01:52.356678 systemd-tmpfiles[370]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 13:01:52.358283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 13:01:52.361573 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 13:01:52.364792 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 13:01:52.368733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 13:01:52.376398 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 13:01:52.391150 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=40a4ce45c98d9ce433e45e6a01680c5f2d7ea331751961c11527ad47f57b4bef
Oct 30 13:01:52.413225 systemd-resolved[384]: Positive Trust Anchors:
Oct 30 13:01:52.413244 systemd-resolved[384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 13:01:52.413247 systemd-resolved[384]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 30 13:01:52.413277 systemd-resolved[384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 13:01:52.435668 systemd-resolved[384]: Defaulting to hostname 'linux'.
Oct 30 13:01:52.436612 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 13:01:52.438210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 13:01:52.468907 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 13:01:52.477901 kernel: iscsi: registered transport (tcp)
Oct 30 13:01:52.491266 kernel: iscsi: registered transport (qla4xxx)
Oct 30 13:01:52.491299 kernel: QLogic iSCSI HBA Driver
Oct 30 13:01:52.511932 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 13:01:52.542065 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 13:01:52.545128 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 13:01:52.588577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 13:01:52.591168 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 13:01:52.592997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 13:01:52.629195 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 13:01:52.631969 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 13:01:52.662636 systemd-udevd[629]: Using default interface naming scheme 'v257'.
Oct 30 13:01:52.670497 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 13:01:52.673721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 13:01:52.696923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 13:01:52.699597 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation
Oct 30 13:01:52.700058 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 13:01:52.721793 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 13:01:52.724229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 13:01:52.747670 systemd-networkd[741]: lo: Link UP
Oct 30 13:01:52.747678 systemd-networkd[741]: lo: Gained carrier
Oct 30 13:01:52.748172 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 13:01:52.749797 systemd[1]: Reached target network.target - Network.
Oct 30 13:01:52.779842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 13:01:52.783088 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 13:01:52.837392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 30 13:01:52.845694 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 30 13:01:52.863356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 30 13:01:52.871592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 13:01:52.876215 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 13:01:52.877683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 13:01:52.877792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:01:52.881260 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:01:52.883849 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:01:52.883853 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 13:01:52.884293 systemd-networkd[741]: eth0: Link UP
Oct 30 13:01:52.884929 systemd-networkd[741]: eth0: Gained carrier
Oct 30 13:01:52.884940 systemd-networkd[741]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 13:01:52.898555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 13:01:52.904282 disk-uuid[803]: Primary Header is updated.
Oct 30 13:01:52.904282 disk-uuid[803]: Secondary Entries is updated.
Oct 30 13:01:52.904282 disk-uuid[803]: Secondary Header is updated.
Oct 30 13:01:52.909257 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 13:01:52.925087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 13:01:52.931044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 13:01:52.941387 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 13:01:52.943196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 13:01:52.945995 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 13:01:52.955040 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 13:01:52.984046 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 13:01:53.931224 disk-uuid[805]: Warning: The kernel is still using the old partition table.
Oct 30 13:01:53.931224 disk-uuid[805]: The new table will be used at the next reboot or after you
Oct 30 13:01:53.931224 disk-uuid[805]: run partprobe(8) or kpartx(8)
Oct 30 13:01:53.931224 disk-uuid[805]: The operation has completed successfully.
Oct 30 13:01:53.940945 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 30 13:01:53.942087 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 13:01:53.944345 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 13:01:53.949997 systemd-networkd[741]: eth0: Gained IPv6LL
Oct 30 13:01:53.972182 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834)
Oct 30 13:01:53.972222 kernel: BTRFS info (device vda6): first mount of filesystem 7ee39921-9599-46fb-a8d3-5abbd8930a87
Oct 30 13:01:53.972239 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 13:01:53.975912 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:01:53.975954 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:01:53.981895 kernel: BTRFS info (device vda6): last unmount of filesystem 7ee39921-9599-46fb-a8d3-5abbd8930a87
Oct 30 13:01:53.982164 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 13:01:53.984296 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 13:01:54.072624 ignition[853]: Ignition 2.22.0
Oct 30 13:01:54.072640 ignition[853]: Stage: fetch-offline
Oct 30 13:01:54.072683 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:01:54.072692 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:01:54.072838 ignition[853]: parsed url from cmdline: ""
Oct 30 13:01:54.072841 ignition[853]: no config URL provided
Oct 30 13:01:54.072845 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 13:01:54.072852 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Oct 30 13:01:54.072916 ignition[853]: op(1): [started] loading QEMU firmware config module
Oct 30 13:01:54.072921 ignition[853]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 30 13:01:54.078433 ignition[853]: op(1): [finished] loading QEMU firmware config module
Oct 30 13:01:54.125032 ignition[853]: parsing config with SHA512: 54b14ef31b56afc278d1bbdfac164821fc2ea7248e34518a74fe61cf5a8982753841a090b8c4f37bf972e1d3dd223beb2d122f85908ee0fa89aa8bd0b06c1013
Oct 30 13:01:54.130715 unknown[853]: fetched base config from "system"
Oct 30 13:01:54.130728 unknown[853]: fetched user config from "qemu"
Oct 30 13:01:54.131190 ignition[853]: fetch-offline: fetch-offline passed
Oct 30 13:01:54.131247 ignition[853]: Ignition finished successfully
Oct 30 13:01:54.134777 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 13:01:54.136783 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 30 13:01:54.137611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 30 13:01:54.184067 ignition[866]: Ignition 2.22.0
Oct 30 13:01:54.184085 ignition[866]: Stage: kargs
Oct 30 13:01:54.184371 ignition[866]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:01:54.184380 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:01:54.185106 ignition[866]: kargs: kargs passed
Oct 30 13:01:54.185146 ignition[866]: Ignition finished successfully
Oct 30 13:01:54.189228 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 13:01:54.191839 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 13:01:54.220348 ignition[875]: Ignition 2.22.0
Oct 30 13:01:54.220366 ignition[875]: Stage: disks
Oct 30 13:01:54.220518 ignition[875]: no configs at "/usr/lib/ignition/base.d"
Oct 30 13:01:54.220526 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:01:54.223306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 13:01:54.221297 ignition[875]: disks: disks passed
Oct 30 13:01:54.225149 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 13:01:54.221345 ignition[875]: Ignition finished successfully
Oct 30 13:01:54.227009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 13:01:54.228788 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 13:01:54.230891 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 13:01:54.232666 systemd[1]: Reached target basic.target - Basic System.
Oct 30 13:01:54.235687 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 13:01:54.262397 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 30 13:01:54.267282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 13:01:54.270990 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 13:01:54.338514 kernel: EXT4-fs (vda9): mounted filesystem fd7f9a08-63ed-43be-9cb0-aa8455dcb677 r/w with ordered data mode. Quota mode: none.
Oct 30 13:01:54.337593 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 13:01:54.339065 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 13:01:54.341840 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 13:01:54.343705 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 13:01:54.344927 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 30 13:01:54.344963 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 13:01:54.345007 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 13:01:54.359374 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 13:01:54.362163 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 13:01:54.367588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Oct 30 13:01:54.367612 kernel: BTRFS info (device vda6): first mount of filesystem 7ee39921-9599-46fb-a8d3-5abbd8930a87
Oct 30 13:01:54.367623 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 13:01:54.372902 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:01:54.372941 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:01:54.371787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 13:01:54.400718 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory
Oct 30 13:01:54.405297 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory
Oct 30 13:01:54.408551 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory
Oct 30 13:01:54.411693 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 30 13:01:54.486557 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 30 13:01:54.489222 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 30 13:01:54.491047 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 30 13:01:54.515914 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 30 13:01:54.517624 kernel: BTRFS info (device vda6): last unmount of filesystem 7ee39921-9599-46fb-a8d3-5abbd8930a87
Oct 30 13:01:54.527155 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 30 13:01:54.543359 ignition[1009]: INFO : Ignition 2.22.0
Oct 30 13:01:54.543359 ignition[1009]: INFO : Stage: mount
Oct 30 13:01:54.545196 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 13:01:54.545196 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:01:54.545196 ignition[1009]: INFO : mount: mount passed
Oct 30 13:01:54.545196 ignition[1009]: INFO : Ignition finished successfully
Oct 30 13:01:54.546459 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 30 13:01:54.550600 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 30 13:01:55.339100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 13:01:55.359067 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1022)
Oct 30 13:01:55.359106 kernel: BTRFS info (device vda6): first mount of filesystem 7ee39921-9599-46fb-a8d3-5abbd8930a87
Oct 30 13:01:55.359118 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 30 13:01:55.362903 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 13:01:55.362926 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 13:01:55.364198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 13:01:55.402081 ignition[1040]: INFO : Ignition 2.22.0
Oct 30 13:01:55.402081 ignition[1040]: INFO : Stage: files
Oct 30 13:01:55.403811 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 13:01:55.403811 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 13:01:55.403811 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 13:01:55.407817 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 13:01:55.407817 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 13:01:55.407817 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 13:01:55.407817 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 13:01:55.407817 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 13:01:55.407068 unknown[1040]: wrote ssh authorized keys file for user: core
Oct 30 13:01:55.416176 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 30 13:01:55.416176 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 30 13:01:55.470199 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 13:01:55.604115 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 30 13:01:55.604115 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
"/sysroot/home/core/nfs-pvc.yaml" Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 30 13:01:55.608327 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 30 13:01:55.627588 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 30 13:01:55.627588 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 30 13:01:55.627588 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 30 13:01:56.050484 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 30 13:01:56.261941 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 30 13:01:56.261941 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 30 13:01:56.266148 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 30 13:01:56.282095 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:01:56.283706 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 30 13:01:56.283706 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 30 13:01:56.283706 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 30 13:01:56.283706 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 30 13:01:56.283706 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 30 13:01:56.283706 
Oct 30 13:01:56.283706 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 13:01:56.283706 ignition[1040]: INFO : files: files passed
Oct 30 13:01:56.283706 ignition[1040]: INFO : Ignition finished successfully
Oct 30 13:01:56.288504 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 13:01:56.291703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 13:01:56.293956 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 13:01:56.302041 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 13:01:56.302127 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 13:01:56.306746 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 30 13:01:56.308238 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:01:56.308238 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:01:56.312587 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 13:01:56.308659 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 13:01:56.311443 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 13:01:56.314359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 13:01:56.356859 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 13:01:56.357960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 13:01:56.359425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 13:01:56.361427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 13:01:56.363574 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 13:01:56.364318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 13:01:56.393696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 13:01:56.396259 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 13:01:56.423680 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 13:01:56.423903 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 13:01:56.426320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 13:01:56.428547 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 13:01:56.430497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 13:01:56.430624 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 13:01:56.433339 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 13:01:56.435528 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 13:01:56.437321 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 13:01:56.439183 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 13:01:56.441251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 13:01:56.443406 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 30 13:01:56.445570 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 30 13:01:56.447703 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 13:01:56.449936 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 30 13:01:56.452070 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 30 13:01:56.453984 systemd[1]: Stopped target swap.target - Swaps. Oct 30 13:01:56.455740 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 30 13:01:56.455871 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 30 13:01:56.458452 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:01:56.460589 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:01:56.462712 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 30 13:01:56.466755 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:01:56.468336 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 30 13:01:56.468452 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 30 13:01:56.471593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 30 13:01:56.471709 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 13:01:56.473950 systemd[1]: Stopped target paths.target - Path Units. Oct 30 13:01:56.475734 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 30 13:01:56.475848 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:01:56.478120 systemd[1]: Stopped target slices.target - Slice Units. Oct 30 13:01:56.479834 systemd[1]: Stopped target sockets.target - Socket Units. Oct 30 13:01:56.481814 systemd[1]: iscsid.socket: Deactivated successfully. Oct 30 13:01:56.481917 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 13:01:56.484233 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 30 13:01:56.484318 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 13:01:56.486047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 30 13:01:56.486156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 30 13:01:56.488101 systemd[1]: ignition-files.service: Deactivated successfully. Oct 30 13:01:56.488216 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 30 13:01:56.490690 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 30 13:01:56.493237 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 30 13:01:56.494305 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 30 13:01:56.494430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 13:01:56.496698 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 30 13:01:56.496807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:01:56.499086 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 30 13:01:56.499192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 13:01:56.504945 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 30 13:01:56.510909 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 30 13:01:56.518651 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 30 13:01:56.524048 ignition[1098]: INFO : Ignition 2.22.0 Oct 30 13:01:56.524048 ignition[1098]: INFO : Stage: umount Oct 30 13:01:56.526809 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 13:01:56.526809 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 13:01:56.526809 ignition[1098]: INFO : umount: umount passed Oct 30 13:01:56.526809 ignition[1098]: INFO : Ignition finished successfully Oct 30 13:01:56.527712 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 30 13:01:56.527818 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 30 13:01:56.529137 systemd[1]: Stopped target network.target - Network. Oct 30 13:01:56.531006 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 30 13:01:56.531065 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 30 13:01:56.532805 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 30 13:01:56.532854 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 30 13:01:56.534632 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 30 13:01:56.534679 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 30 13:01:56.536596 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 30 13:01:56.536641 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 30 13:01:56.538744 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 30 13:01:56.540612 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 30 13:01:56.545283 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 30 13:01:56.545392 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 30 13:01:56.555664 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 30 13:01:56.555768 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 30 13:01:56.559234 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 30 13:01:56.559351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 30 13:01:56.562528 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 30 13:01:56.564027 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 30 13:01:56.564067 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:01:56.566196 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 30 13:01:56.566252 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 30 13:01:56.568917 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 30 13:01:56.570048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 30 13:01:56.570114 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 13:01:56.572223 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 30 13:01:56.572266 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:01:56.574166 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 30 13:01:56.574211 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 30 13:01:56.576140 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 30 13:01:56.591294 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 30 13:01:56.591432 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:01:56.594672 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 30 13:01:56.594735 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 30 13:01:56.596161 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 30 13:01:56.596193 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:01:56.598219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 30 13:01:56.598267 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 30 13:01:56.601095 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 30 13:01:56.601143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 30 13:01:56.603204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 30 13:01:56.603253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 13:01:56.611434 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 30 13:01:56.612708 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 30 13:01:56.612772 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:01:56.615129 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 30 13:01:56.615177 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:01:56.617340 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 30 13:01:56.617384 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:01:56.620014 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 30 13:01:56.620058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:01:56.622139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 13:01:56.622184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:01:56.625088 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 30 13:01:56.625192 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 30 13:01:56.626716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 30 13:01:56.626815 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 30 13:01:56.630680 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 30 13:01:56.632698 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 30 13:01:56.648126 systemd[1]: Switching root. Oct 30 13:01:56.684020 systemd-journald[345]: Journal stopped Oct 30 13:01:57.448203 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). 
Oct 30 13:01:57.448251 kernel: SELinux: policy capability network_peer_controls=1 Oct 30 13:01:57.448267 kernel: SELinux: policy capability open_perms=1 Oct 30 13:01:57.448278 kernel: SELinux: policy capability extended_socket_class=1 Oct 30 13:01:57.448293 kernel: SELinux: policy capability always_check_network=0 Oct 30 13:01:57.448304 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 30 13:01:57.448315 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 30 13:01:57.448326 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 30 13:01:57.448336 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 30 13:01:57.448346 kernel: SELinux: policy capability userspace_initial_context=0 Oct 30 13:01:57.448359 systemd[1]: Successfully loaded SELinux policy in 62.523ms. Oct 30 13:01:57.448381 kernel: audit: type=1403 audit(1761829316.874:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 30 13:01:57.448392 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.211ms. Oct 30 13:01:57.448418 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 13:01:57.448429 systemd[1]: Detected virtualization kvm. Oct 30 13:01:57.448444 systemd[1]: Detected architecture arm64. Oct 30 13:01:57.448455 systemd[1]: Detected first boot. Oct 30 13:01:57.448477 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 30 13:01:57.448490 zram_generator::config[1144]: No configuration found. Oct 30 13:01:57.448503 kernel: NET: Registered PF_VSOCK protocol family Oct 30 13:01:57.448513 systemd[1]: Populated /etc with preset unit settings. Oct 30 13:01:57.448525 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 30 13:01:57.448536 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 30 13:01:57.448547 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 30 13:01:57.448563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 30 13:01:57.448574 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 30 13:01:57.448585 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 30 13:01:57.448597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 30 13:01:57.448609 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 30 13:01:57.448621 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 30 13:01:57.448634 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 30 13:01:57.448645 systemd[1]: Created slice user.slice - User and Session Slice. Oct 30 13:01:57.448656 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 13:01:57.448667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 13:01:57.448678 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 30 13:01:57.448690 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
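[editor's note] The long systemd feature string above encodes compile-time options: a leading "+" means the feature was built in, a leading "-" means it was compiled out. A small sketch that splits such a string into enabled and disabled sets; the string below is abbreviated from the log line above.

    # Split a systemd feature string into built-in (+) and absent (-) options.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP "
                "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")  # abbreviated from the log

    enabled  = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]

    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))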
Oct 30 13:01:57.448701 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 30 13:01:57.448729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 13:01:57.448740 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 30 13:01:57.448751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 13:01:57.448763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 13:01:57.448774 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 30 13:01:57.448784 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 30 13:01:57.448796 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 30 13:01:57.448807 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 30 13:01:57.448818 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 13:01:57.448829 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 13:01:57.448842 systemd[1]: Reached target slices.target - Slice Units. Oct 30 13:01:57.448853 systemd[1]: Reached target swap.target - Swaps. Oct 30 13:01:57.448865 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 30 13:01:57.448891 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 30 13:01:57.448907 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 30 13:01:57.448919 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 30 13:01:57.448930 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 13:01:57.448940 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 13:01:57.448952 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 30 13:01:57.448962 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 30 13:01:57.448972 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 30 13:01:57.448985 systemd[1]: Mounting media.mount - External Media Directory... Oct 30 13:01:57.448996 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 30 13:01:57.449006 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 30 13:01:57.449017 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 30 13:01:57.449028 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 30 13:01:57.449039 systemd[1]: Reached target machines.target - Containers. Oct 30 13:01:57.449050 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 30 13:01:57.449063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:01:57.449074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 13:01:57.449085 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 30 13:01:57.449096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:01:57.449107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 30 13:01:57.449118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:01:57.449130 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 30 13:01:57.449141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:01:57.449151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 30 13:01:57.449163 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 30 13:01:57.449174 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 30 13:01:57.449184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 30 13:01:57.449195 systemd[1]: Stopped systemd-fsck-usr.service. Oct 30 13:01:57.449207 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:01:57.449219 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 13:01:57.449229 kernel: ACPI: bus type drm_connector registered Oct 30 13:01:57.449240 kernel: fuse: init (API version 7.41) Oct 30 13:01:57.449250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 13:01:57.449261 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 13:01:57.449272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 30 13:01:57.449284 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 30 13:01:57.449295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 30 13:01:57.449333 systemd-journald[1221]: Collecting audit messages is disabled. Oct 30 13:01:57.449355 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 30 13:01:57.449367 systemd-journald[1221]: Journal started Oct 30 13:01:57.449390 systemd-journald[1221]: Runtime Journal (/run/log/journal/495d50383ecd48329d6c16c60ecbf8d0) is 6M, max 48.5M, 42.4M free. Oct 30 13:01:57.224224 systemd[1]: Queued start job for default target multi-user.target. Oct 30 13:01:57.245751 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 30 13:01:57.246185 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 30 13:01:57.451761 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 30 13:01:57.453906 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 13:01:57.454849 systemd[1]: Mounted media.mount - External Media Directory. Oct 30 13:01:57.456017 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 30 13:01:57.457268 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 30 13:01:57.458589 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 30 13:01:57.460011 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 30 13:01:57.462932 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 13:01:57.464491 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 30 13:01:57.464650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 30 13:01:57.466214 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 30 13:01:57.466392 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:01:57.467938 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 13:01:57.468096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 13:01:57.469492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:01:57.469652 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 13:01:57.471240 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 30 13:01:57.471423 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 30 13:01:57.472893 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:01:57.473048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:01:57.474487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 13:01:57.476391 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 13:01:57.478623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 30 13:01:57.480554 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 30 13:01:57.492985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 13:01:57.495785 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 13:01:57.497545 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 30 13:01:57.499852 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 30 13:01:57.501855 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 30 13:01:57.503070 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 30 13:01:57.503108 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 30 13:01:57.504950 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 30 13:01:57.506378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:01:57.511676 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 30 13:01:57.513875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 30 13:01:57.515096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 13:01:57.515983 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 13:01:57.517193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 13:01:57.521018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 13:01:57.521550 systemd-journald[1221]: Time spent on flushing to /var/log/journal/495d50383ecd48329d6c16c60ecbf8d0 is 13.175ms for 871 entries. Oct 30 13:01:57.521550 systemd-journald[1221]: System Journal (/var/log/journal/495d50383ecd48329d6c16c60ecbf8d0) is 8M, max 163.5M, 155.5M free. Oct 30 13:01:57.539119 systemd-journald[1221]: Received client request to flush runtime journal. 
Oct 30 13:01:57.527017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 13:01:57.531092 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 13:01:57.533553 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 13:01:57.534996 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 13:01:57.538060 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 13:01:57.542513 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 13:01:57.544476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 13:01:57.546902 kernel: loop1: detected capacity change from 0 to 100192 Oct 30 13:01:57.548146 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 13:01:57.553035 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 13:01:57.555369 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Oct 30 13:01:57.555388 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Oct 30 13:01:57.565149 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 13:01:57.569914 kernel: loop2: detected capacity change from 0 to 200800 Oct 30 13:01:57.570049 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 13:01:57.583925 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 13:01:57.599968 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 13:01:57.601913 kernel: loop3: detected capacity change from 0 to 119400 Oct 30 13:01:57.604080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 13:01:57.608021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 13:01:57.612283 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 13:01:57.630910 kernel: loop4: detected capacity change from 0 to 100192 Oct 30 13:01:57.637915 kernel: loop5: detected capacity change from 0 to 200800 Oct 30 13:01:57.638367 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Oct 30 13:01:57.638636 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Oct 30 13:01:57.642071 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 13:01:57.646923 kernel: loop6: detected capacity change from 0 to 119400 Oct 30 13:01:57.650835 (sd-merge)[1285]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 30 13:01:57.653211 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 13:01:57.653558 (sd-merge)[1285]: Merged extensions into '/usr'. Oct 30 13:01:57.658006 systemd[1]: Reload requested from client PID 1261 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 13:01:57.658019 systemd[1]: Reloading... Oct 30 13:01:57.703165 systemd-resolved[1281]: Positive Trust Anchors: Oct 30 13:01:57.703186 systemd-resolved[1281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 13:01:57.703189 systemd-resolved[1281]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 13:01:57.703220 systemd-resolved[1281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 13:01:57.710531 systemd-resolved[1281]: Defaulting to hostname 'linux'. Oct 30 13:01:57.724911 zram_generator::config[1326]: No configuration found. Oct 30 13:01:57.851328 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 13:01:57.851630 systemd[1]: Reloading finished in 193 ms. Oct 30 13:01:57.881528 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 13:01:57.883278 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 13:01:57.886488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 13:01:57.906099 systemd[1]: Starting ensure-sysext.service... Oct 30 13:01:57.907944 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 13:01:57.917019 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)... Oct 30 13:01:57.917036 systemd[1]: Reloading... Oct 30 13:01:57.922477 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 30 13:01:57.922512 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 30 13:01:57.922745 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 13:01:57.922960 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 13:01:57.923567 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 13:01:57.923769 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Oct 30 13:01:57.923820 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Oct 30 13:01:57.927660 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 13:01:57.927672 systemd-tmpfiles[1354]: Skipping /boot Oct 30 13:01:57.933418 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 13:01:57.933433 systemd-tmpfiles[1354]: Skipping /boot Oct 30 13:01:57.960992 zram_generator::config[1387]: No configuration found. Oct 30 13:01:58.087018 systemd[1]: Reloading finished in 169 ms. Oct 30 13:01:58.110525 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 13:01:58.127454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 13:01:58.134994 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 13:01:58.137154 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 13:01:58.147340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
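[editor's note] The positive trust anchors that systemd-resolved logs above are DNSSEC DS records for the root zone; each carries an owner name, a key tag, an algorithm number, a digest type, and a digest. A small sketch parsing the first logged record into those fields (the record text is copied from the log; only the variable names are mine).

    # Parse a DNSSEC DS trust anchor as logged by systemd-resolved.
    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _class, rtype, key_tag, algorithm, digest_type, digest = record.split()
    print(f"owner={owner} keytag={key_tag} alg={algorithm} "
          f"digest_type={digest_type} digest={digest[:16]}...")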
Oct 30 13:01:58.151180 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 13:01:58.154027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 13:01:58.156679 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 13:01:58.161199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:01:58.169182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 13:01:58.173417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 13:01:58.175944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 13:01:58.177360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:01:58.177482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:01:58.184982 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 13:01:58.185176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 13:01:58.187043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 13:01:58.187294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 13:01:58.189510 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 13:01:58.191439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 13:01:58.191679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 13:01:58.197647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 13:01:58.198887 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 13:01:58.200172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 13:01:58.200215 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 13:01:58.200252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 13:01:58.200289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 13:01:58.200759 systemd[1]: Finished ensure-sysext.service. Oct 30 13:01:58.205561 systemd-udevd[1425]: Using default interface naming scheme 'v257'. Oct 30 13:01:58.214063 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 13:01:58.215993 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 13:01:58.218011 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 13:01:58.218195 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 13:01:58.219754 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 30 13:01:58.222736 augenrules[1457]: No rules Oct 30 13:01:58.223226 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 13:01:58.224547 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 13:01:58.224738 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 13:01:58.231281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 13:01:58.241325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 13:01:58.297195 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 13:01:58.300191 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 13:01:58.304518 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 30 13:01:58.344809 systemd-networkd[1470]: lo: Link UP Oct 30 13:01:58.344814 systemd-networkd[1470]: lo: Gained carrier Oct 30 13:01:58.346173 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 13:01:58.346782 systemd-networkd[1470]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:01:58.346792 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 13:01:58.350313 systemd-networkd[1470]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:01:58.350444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 13:01:58.350517 systemd-networkd[1470]: eth0: Link UP Oct 30 13:01:58.350712 systemd-networkd[1470]: eth0: Gained carrier Oct 30 13:01:58.350732 systemd-networkd[1470]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 13:01:58.352114 systemd[1]: Reached target network.target - Network. Oct 30 13:01:58.354820 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 30 13:01:58.357700 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 13:01:58.367948 systemd-networkd[1470]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 13:01:58.368091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 13:01:58.369050 systemd-timesyncd[1455]: Network configuration changed, trying to establish connection. Oct 30 13:01:57.890956 systemd-resolved[1281]: Clock change detected. Flushing caches. Oct 30 13:01:57.897566 systemd-journald[1221]: Time jumped backwards, rotating. Oct 30 13:01:57.890991 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 30 13:01:57.891036 systemd-timesyncd[1455]: Initial clock synchronization to Thu 2025-10-30 13:01:57.890904 UTC. Oct 30 13:01:57.897137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 13:01:57.903086 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 13:01:57.970964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
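[editor's note] The timestamps run backwards at this point (entries at 13:01:58.36… are followed by entries at 13:01:57.89…) because systemd-timesyncd stepped the clock after contacting 10.0.0.1, and both systemd-resolved ("Clock change detected") and journald ("Time jumped backwards, rotating") note the change. A rough check of the step size, using two adjacent timestamps taken from the log:

    # Rough size of the backwards clock step visible in the log above.
    from datetime import datetime

    before_sync = datetime.strptime("13:01:58.369050", "%H:%M:%S.%f")  # last pre-step entry
    after_sync  = datetime.strptime("13:01:57.890956", "%H:%M:%S.%f")  # first post-step entry

    print(f"clock stepped back by ~{(before_sync - after_sync).total_seconds():.3f} s")
    # ~0.478 s, consistent with the sync target of 13:01:57.890904 UTC reported by timesyncd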
Oct 30 13:01:57.989029 ldconfig[1422]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 13:01:57.994979 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 13:01:57.997828 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 13:01:58.016967 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 13:01:58.019308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 13:01:58.021814 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 13:01:58.023166 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 13:01:58.024461 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 13:01:58.025938 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 13:01:58.027171 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 13:01:58.028486 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 13:01:58.029805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 13:01:58.029839 systemd[1]: Reached target paths.target - Path Units. Oct 30 13:01:58.030842 systemd[1]: Reached target timers.target - Timer Units. Oct 30 13:01:58.032752 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 13:01:58.035124 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 13:01:58.037789 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 13:01:58.039316 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 13:01:58.040699 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 13:01:58.043840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 13:01:58.045263 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 13:01:58.047008 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 13:01:58.048189 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 13:01:58.049187 systemd[1]: Reached target basic.target - Basic System. Oct 30 13:01:58.050219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 13:01:58.050253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 13:01:58.051093 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 13:01:58.053025 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 13:01:58.054835 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 13:01:58.056955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 13:01:58.058830 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 13:01:58.059975 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Oct 30 13:01:58.060845 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 13:01:58.063018 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 13:01:58.064916 jq[1534]: false Oct 30 13:01:58.066221 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 13:01:58.069186 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 13:01:58.075063 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 30 13:01:58.076180 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 13:01:58.076608 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 13:01:58.077145 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 13:01:58.078220 extend-filesystems[1535]: Found /dev/vda6 Oct 30 13:01:58.081199 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 13:01:58.082436 extend-filesystems[1535]: Found /dev/vda9 Oct 30 13:01:58.089326 extend-filesystems[1535]: Checking size of /dev/vda9 Oct 30 13:01:58.088906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 30 13:01:58.092011 jq[1553]: true Oct 30 13:01:58.091316 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 13:01:58.091473 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 13:01:58.091716 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 13:01:58.091907 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 13:01:58.093974 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 13:01:58.094152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 13:01:58.104742 extend-filesystems[1535]: Resized partition /dev/vda9 Oct 30 13:01:58.108895 extend-filesystems[1578]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 13:01:58.114286 jq[1566]: true Oct 30 13:01:58.117585 update_engine[1550]: I20251030 13:01:58.117326 1550 main.cc:92] Flatcar Update Engine starting Oct 30 13:01:58.122991 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 30 13:01:58.132704 tar[1565]: linux-arm64/LICENSE Oct 30 13:01:58.133784 tar[1565]: linux-arm64/helm Oct 30 13:01:58.140992 dbus-daemon[1532]: [system] SELinux support is enabled Oct 30 13:01:58.141198 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 13:01:58.144808 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 13:01:58.144860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 13:01:58.146811 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 13:01:58.146835 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 13:01:58.149736 systemd[1]: Started update-engine.service - Update Engine. 
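[editor's note] The EXT4 resize reported above grows the root filesystem from 456704 to 1784827 blocks. Assuming the 4 KiB block size (the "(4k)" note appears in the extend-filesystems output just below), that is roughly 1.7 GiB growing to 6.8 GiB. A quick check:

    # Convert the block counts logged for the /dev/vda9 resize into sizes.
    BLOCK = 4096  # 4k blocks, per the extend-filesystems output below

    for label, blocks in [("before", 456704), ("after", 1784827)]:
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 1.74 GiB, after: 6.81 GiB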
Oct 30 13:01:58.150148 update_engine[1550]: I20251030 13:01:58.150095 1550 update_check_scheduler.cc:74] Next update check in 8m26s Oct 30 13:01:58.156041 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 30 13:01:58.156013 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 13:01:58.177524 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (Power Button) Oct 30 13:01:58.177823 extend-filesystems[1578]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 30 13:01:58.177823 extend-filesystems[1578]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 30 13:01:58.177823 extend-filesystems[1578]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 30 13:01:58.186898 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Oct 30 13:01:58.179307 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 13:01:58.180421 systemd-logind[1545]: New seat seat0. Oct 30 13:01:58.180885 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 13:01:58.187075 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 13:01:58.192010 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Oct 30 13:01:58.193573 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 13:01:58.196450 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 30 13:01:58.212646 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 13:01:58.286975 containerd[1567]: time="2025-10-30T13:01:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 13:01:58.288344 containerd[1567]: time="2025-10-30T13:01:58.288017235Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 13:01:58.302114 containerd[1567]: time="2025-10-30T13:01:58.302036795Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="47.64µs" Oct 30 13:01:58.302260 containerd[1567]: time="2025-10-30T13:01:58.302239555Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302305435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302447115Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302463835Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302485395Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302545035Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302557315Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 
13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302720235Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302734235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302744475Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302752635Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.302817555Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303542 containerd[1567]: time="2025-10-30T13:01:58.303014275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303798 containerd[1567]: time="2025-10-30T13:01:58.303040875Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 13:01:58.303798 containerd[1567]: time="2025-10-30T13:01:58.303050995Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 13:01:58.303798 containerd[1567]: time="2025-10-30T13:01:58.303084035Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 13:01:58.303798 containerd[1567]: time="2025-10-30T13:01:58.303275595Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 13:01:58.303798 containerd[1567]: time="2025-10-30T13:01:58.303332035Z" level=info msg="metadata content store policy set" policy=shared Oct 30 13:01:58.307157 containerd[1567]: time="2025-10-30T13:01:58.307058035Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 30 13:01:58.307288 containerd[1567]: time="2025-10-30T13:01:58.307270995Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 13:01:58.307459 containerd[1567]: time="2025-10-30T13:01:58.307440915Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 13:01:58.307627 containerd[1567]: time="2025-10-30T13:01:58.307607195Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 13:01:58.307775 containerd[1567]: time="2025-10-30T13:01:58.307720395Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 13:01:58.307939 containerd[1567]: time="2025-10-30T13:01:58.307872235Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 13:01:58.308105 containerd[1567]: time="2025-10-30T13:01:58.308086435Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 
13:01:58.308280 containerd[1567]: time="2025-10-30T13:01:58.308263035Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 13:01:58.308344 containerd[1567]: time="2025-10-30T13:01:58.308330475Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 13:01:58.308444 containerd[1567]: time="2025-10-30T13:01:58.308429035Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 13:01:58.308605 containerd[1567]: time="2025-10-30T13:01:58.308549835Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 13:01:58.308754 containerd[1567]: time="2025-10-30T13:01:58.308692755Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 30 13:01:58.309079 containerd[1567]: time="2025-10-30T13:01:58.309053795Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 13:01:58.309581 containerd[1567]: time="2025-10-30T13:01:58.309431155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 13:01:58.309581 containerd[1567]: time="2025-10-30T13:01:58.309523355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 13:01:58.310007 containerd[1567]: time="2025-10-30T13:01:58.309978195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 13:01:58.310042 containerd[1567]: time="2025-10-30T13:01:58.310018195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 13:01:58.310042 containerd[1567]: time="2025-10-30T13:01:58.310033715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 13:01:58.310107 containerd[1567]: time="2025-10-30T13:01:58.310050435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 13:01:58.310107 containerd[1567]: time="2025-10-30T13:01:58.310066395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 13:01:58.310107 containerd[1567]: time="2025-10-30T13:01:58.310082915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 13:01:58.310107 containerd[1567]: time="2025-10-30T13:01:58.310094355Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 13:01:58.310184 containerd[1567]: time="2025-10-30T13:01:58.310108195Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 13:01:58.310463 containerd[1567]: time="2025-10-30T13:01:58.310372275Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 13:01:58.310463 containerd[1567]: time="2025-10-30T13:01:58.310399715Z" level=info msg="Start snapshots syncer" Oct 30 13:01:58.310463 containerd[1567]: time="2025-10-30T13:01:58.310426675Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 13:01:58.310761 containerd[1567]: time="2025-10-30T13:01:58.310716115Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 13:01:58.310863 containerd[1567]: time="2025-10-30T13:01:58.310777475Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 13:01:58.310884 containerd[1567]: time="2025-10-30T13:01:58.310856395Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 13:01:58.311226 containerd[1567]: time="2025-10-30T13:01:58.311192115Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 13:01:58.311266 containerd[1567]: time="2025-10-30T13:01:58.311234555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 13:01:58.311266 containerd[1567]: time="2025-10-30T13:01:58.311250595Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 13:01:58.311365 containerd[1567]: time="2025-10-30T13:01:58.311265995Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 13:01:58.311365 containerd[1567]: time="2025-10-30T13:01:58.311282955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 13:01:58.311365 containerd[1567]: time="2025-10-30T13:01:58.311294355Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 13:01:58.311365 containerd[1567]: time="2025-10-30T13:01:58.311309195Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 13:01:58.311365 containerd[1567]: time="2025-10-30T13:01:58.311339395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 13:01:58.311365 containerd[1567]: 
time="2025-10-30T13:01:58.311354715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311369195Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311409995Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311424675Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311436595Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311449275Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311460395Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311483915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 13:01:58.311521 containerd[1567]: time="2025-10-30T13:01:58.311499595Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 13:01:58.311734 containerd[1567]: time="2025-10-30T13:01:58.311605675Z" level=info msg="runtime interface created" Oct 30 13:01:58.311734 containerd[1567]: time="2025-10-30T13:01:58.311615595Z" level=info msg="created NRI interface" Oct 30 13:01:58.311734 containerd[1567]: time="2025-10-30T13:01:58.311625395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 13:01:58.311734 containerd[1567]: time="2025-10-30T13:01:58.311640275Z" level=info msg="Connect containerd service" Oct 30 13:01:58.311734 containerd[1567]: time="2025-10-30T13:01:58.311674795Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 13:01:58.312489 containerd[1567]: time="2025-10-30T13:01:58.312457835Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380163595Z" level=info msg="Start subscribing containerd event" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380252355Z" level=info msg="Start recovering state" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380336275Z" level=info msg="Start event monitor" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380349035Z" level=info msg="Start cni network conf syncer for default" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380358275Z" level=info msg="Start streaming server" Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380434115Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 13:01:58.380584 containerd[1567]: 
time="2025-10-30T13:01:58.380441355Z" level=info msg="runtime interface starting up..." Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380446755Z" level=info msg="starting plugins..." Oct 30 13:01:58.380584 containerd[1567]: time="2025-10-30T13:01:58.380460475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 13:01:58.380849 containerd[1567]: time="2025-10-30T13:01:58.380634555Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 13:01:58.380849 containerd[1567]: time="2025-10-30T13:01:58.380698155Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 13:01:58.380849 containerd[1567]: time="2025-10-30T13:01:58.380756115Z" level=info msg="containerd successfully booted in 0.094294s" Oct 30 13:01:58.380895 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 13:01:58.467883 tar[1565]: linux-arm64/README.md Oct 30 13:01:58.483914 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 13:01:58.569769 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 13:01:58.589010 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 13:01:58.592743 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 13:01:58.613204 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 13:01:58.614035 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 13:01:58.616631 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 13:01:58.636006 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 13:01:58.640083 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 13:01:58.642283 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 30 13:01:58.643728 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 13:01:59.228113 systemd-networkd[1470]: eth0: Gained IPv6LL Oct 30 13:01:59.231989 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 13:01:59.233829 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 13:01:59.236528 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 30 13:01:59.238959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:01:59.249326 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 13:01:59.264432 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 13:01:59.264670 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 30 13:01:59.266356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 13:01:59.270211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 13:01:59.762732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:01:59.764403 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 13:01:59.766450 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:01:59.766548 systemd[1]: Startup finished in 1.175s (kernel) + 4.789s (initrd) + 3.436s (userspace) = 9.402s. 
Oct 30 13:02:00.065699 kubelet[1669]: E1030 13:02:00.065576 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:02:00.067734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:02:00.067873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:02:00.068188 systemd[1]: kubelet.service: Consumed 680ms CPU time, 247.8M memory peak. Oct 30 13:02:02.899366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 13:02:02.900479 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:41962.service - OpenSSH per-connection server daemon (10.0.0.1:41962). Oct 30 13:02:02.976570 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 41962 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:02.978227 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:02.984050 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 13:02:02.984960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 13:02:02.991783 systemd-logind[1545]: New session 1 of user core. Oct 30 13:02:03.012985 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 13:02:03.015439 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 13:02:03.032279 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 13:02:03.034603 systemd-logind[1545]: New session c1 of user core. Oct 30 13:02:03.143571 systemd[1687]: Queued start job for default target default.target. Oct 30 13:02:03.165879 systemd[1687]: Created slice app.slice - User Application Slice. Oct 30 13:02:03.165914 systemd[1687]: Reached target paths.target - Paths. Oct 30 13:02:03.165980 systemd[1687]: Reached target timers.target - Timers. Oct 30 13:02:03.167231 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 13:02:03.177226 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 13:02:03.177298 systemd[1687]: Reached target sockets.target - Sockets. Oct 30 13:02:03.177338 systemd[1687]: Reached target basic.target - Basic System. Oct 30 13:02:03.177365 systemd[1687]: Reached target default.target - Main User Target. Oct 30 13:02:03.177390 systemd[1687]: Startup finished in 136ms. Oct 30 13:02:03.177606 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 13:02:03.178911 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 30 13:02:03.188619 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:41978.service - OpenSSH per-connection server daemon (10.0.0.1:41978). Oct 30 13:02:03.253828 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 41978 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.255170 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.259245 systemd-logind[1545]: New session 2 of user core. Oct 30 13:02:03.270083 systemd[1]: Started session-2.scope - Session 2 of User core. 
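
The sshd "Accepted publickey ... SHA256:rXe27..." entries identify the client key by its SHA256 fingerprint. That fingerprint can be recomputed from the corresponding authorized_keys entry; a small sketch with golang.org/x/crypto/ssh (the key path is an assumption, not taken from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Hypothetical path: any single authorized_keys line works here.
    	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints "SHA256:<base64>" in the same form sshd logs it.
    	fmt.Println(ssh.FingerprintSHA256(pub))
    }
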
Oct 30 13:02:03.280599 sshd[1701]: Connection closed by 10.0.0.1 port 41978 Oct 30 13:02:03.281009 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:03.294001 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:41978.service: Deactivated successfully. Oct 30 13:02:03.296312 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 13:02:03.296999 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Oct 30 13:02:03.299028 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:41990.service - OpenSSH per-connection server daemon (10.0.0.1:41990). Oct 30 13:02:03.299605 systemd-logind[1545]: Removed session 2. Oct 30 13:02:03.354370 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 41990 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.355881 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.359668 systemd-logind[1545]: New session 3 of user core. Oct 30 13:02:03.367110 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 13:02:03.373626 sshd[1710]: Connection closed by 10.0.0.1 port 41990 Oct 30 13:02:03.373519 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:03.385784 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:41990.service: Deactivated successfully. Oct 30 13:02:03.387144 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 13:02:03.387754 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Oct 30 13:02:03.389812 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:42000.service - OpenSSH per-connection server daemon (10.0.0.1:42000). Oct 30 13:02:03.390253 systemd-logind[1545]: Removed session 3. Oct 30 13:02:03.438587 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 42000 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.440110 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.443720 systemd-logind[1545]: New session 4 of user core. Oct 30 13:02:03.451075 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 13:02:03.460860 sshd[1719]: Connection closed by 10.0.0.1 port 42000 Oct 30 13:02:03.460754 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:03.476125 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:42000.service: Deactivated successfully. Oct 30 13:02:03.478201 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 13:02:03.479309 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Oct 30 13:02:03.481335 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:42004.service - OpenSSH per-connection server daemon (10.0.0.1:42004). Oct 30 13:02:03.481988 systemd-logind[1545]: Removed session 4. Oct 30 13:02:03.534528 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 42004 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.535613 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.539714 systemd-logind[1545]: New session 5 of user core. Oct 30 13:02:03.552068 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 30 13:02:03.568181 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 13:02:03.568695 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:02:03.592744 sudo[1729]: pam_unix(sudo:session): session closed for user root Oct 30 13:02:03.594538 sshd[1728]: Connection closed by 10.0.0.1 port 42004 Oct 30 13:02:03.594912 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:03.604891 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:42004.service: Deactivated successfully. Oct 30 13:02:03.607454 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 13:02:03.608488 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Oct 30 13:02:03.610512 systemd-logind[1545]: Removed session 5. Oct 30 13:02:03.612281 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:42014.service - OpenSSH per-connection server daemon (10.0.0.1:42014). Oct 30 13:02:03.674155 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 42014 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.675450 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.680001 systemd-logind[1545]: New session 6 of user core. Oct 30 13:02:03.694087 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 13:02:03.705654 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 13:02:03.706197 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:02:03.751441 sudo[1740]: pam_unix(sudo:session): session closed for user root Oct 30 13:02:03.757792 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 13:02:03.758095 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:02:03.766223 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 13:02:03.801822 augenrules[1762]: No rules Oct 30 13:02:03.802899 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 13:02:03.804008 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 13:02:03.804824 sudo[1739]: pam_unix(sudo:session): session closed for user root Oct 30 13:02:03.806288 sshd[1738]: Connection closed by 10.0.0.1 port 42014 Oct 30 13:02:03.806633 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:03.818030 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:42014.service: Deactivated successfully. Oct 30 13:02:03.819631 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 13:02:03.820432 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Oct 30 13:02:03.822835 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:42016.service - OpenSSH per-connection server daemon (10.0.0.1:42016). Oct 30 13:02:03.823502 systemd-logind[1545]: Removed session 6. Oct 30 13:02:03.877580 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 42016 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:02:03.878568 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:02:03.882992 systemd-logind[1545]: New session 7 of user core. Oct 30 13:02:03.893085 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 30 13:02:03.902527 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 13:02:03.902784 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 13:02:04.166899 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 13:02:04.186174 (dockerd)[1797]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 13:02:04.384116 dockerd[1797]: time="2025-10-30T13:02:04.384049355Z" level=info msg="Starting up" Oct 30 13:02:04.385461 dockerd[1797]: time="2025-10-30T13:02:04.385437795Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 13:02:04.396016 dockerd[1797]: time="2025-10-30T13:02:04.395973475Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 13:02:04.527180 dockerd[1797]: time="2025-10-30T13:02:04.527072355Z" level=info msg="Loading containers: start." Oct 30 13:02:04.537940 kernel: Initializing XFRM netlink socket Oct 30 13:02:04.723155 systemd-networkd[1470]: docker0: Link UP Oct 30 13:02:04.726701 dockerd[1797]: time="2025-10-30T13:02:04.726657315Z" level=info msg="Loading containers: done." Oct 30 13:02:04.737702 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck810760481-merged.mount: Deactivated successfully. Oct 30 13:02:04.738742 dockerd[1797]: time="2025-10-30T13:02:04.738686915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 13:02:04.738807 dockerd[1797]: time="2025-10-30T13:02:04.738774595Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 13:02:04.739111 dockerd[1797]: time="2025-10-30T13:02:04.738918275Z" level=info msg="Initializing buildkit" Oct 30 13:02:04.760056 dockerd[1797]: time="2025-10-30T13:02:04.760018515Z" level=info msg="Completed buildkit initialization" Oct 30 13:02:04.766004 dockerd[1797]: time="2025-10-30T13:02:04.765961915Z" level=info msg="Daemon has completed initialization" Oct 30 13:02:04.766172 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 13:02:04.766354 dockerd[1797]: time="2025-10-30T13:02:04.766116475Z" level=info msg="API listen on /run/docker.sock" Oct 30 13:02:05.159028 containerd[1567]: time="2025-10-30T13:02:05.158989275Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 30 13:02:05.687215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218679524.mount: Deactivated successfully. 
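
dockerd is now answering on /run/docker.sock (backed by its own containerd client socket), while the main containerd instance handles the Kubernetes image pulls that follow. As a rough illustration of talking to that Docker API socket from Go, a sketch with the Docker SDK; only the socket path comes from the log, the rest is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// Talk to the socket dockerd reported with "API listen on /run/docker.sock".
    	cli, err := client.NewClientWithOpts(
    		client.WithHost("unix:///run/docker.sock"),
    		client.WithAPIVersionNegotiation(),
    	)
    	if err != nil {
    		log.Fatalf("create docker client: %v", err)
    	}
    	defer cli.Close()

    	v, err := cli.ServerVersion(context.Background())
    	if err != nil {
    		log.Fatalf("server version: %v", err)
    	}
    	// The log above shows dockerd 28.0.4 with the overlay2 storage driver.
    	fmt.Printf("dockerd %s (API %s)\n", v.Version, v.APIVersion)
    }
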
Oct 30 13:02:06.719024 containerd[1567]: time="2025-10-30T13:02:06.718961835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:06.719559 containerd[1567]: time="2025-10-30T13:02:06.719512435Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 30 13:02:06.720390 containerd[1567]: time="2025-10-30T13:02:06.720363435Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:06.724146 containerd[1567]: time="2025-10-30T13:02:06.723382995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:06.724146 containerd[1567]: time="2025-10-30T13:02:06.723916355Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.5648862s" Oct 30 13:02:06.724146 containerd[1567]: time="2025-10-30T13:02:06.723965475Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 30 13:02:06.724537 containerd[1567]: time="2025-10-30T13:02:06.724515755Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 30 13:02:07.971412 containerd[1567]: time="2025-10-30T13:02:07.971371275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:07.972389 containerd[1567]: time="2025-10-30T13:02:07.971884635Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 30 13:02:07.972869 containerd[1567]: time="2025-10-30T13:02:07.972838555Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:07.978128 containerd[1567]: time="2025-10-30T13:02:07.978088315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:07.978914 containerd[1567]: time="2025-10-30T13:02:07.978887675Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.2542866s" Oct 30 13:02:07.978965 containerd[1567]: time="2025-10-30T13:02:07.978919035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 30 13:02:07.979356 
containerd[1567]: time="2025-10-30T13:02:07.979326995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 30 13:02:08.951994 containerd[1567]: time="2025-10-30T13:02:08.951948355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:08.952801 containerd[1567]: time="2025-10-30T13:02:08.952585795Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 30 13:02:08.953407 containerd[1567]: time="2025-10-30T13:02:08.953380355Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:08.956028 containerd[1567]: time="2025-10-30T13:02:08.955996955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:08.957143 containerd[1567]: time="2025-10-30T13:02:08.957109675Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 977.75344ms" Oct 30 13:02:08.957143 containerd[1567]: time="2025-10-30T13:02:08.957140835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 30 13:02:08.957875 containerd[1567]: time="2025-10-30T13:02:08.957850075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 30 13:02:10.038001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388801433.mount: Deactivated successfully. 
Oct 30 13:02:10.197627 containerd[1567]: time="2025-10-30T13:02:10.197566435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:10.198152 containerd[1567]: time="2025-10-30T13:02:10.198108195Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 30 13:02:10.198968 containerd[1567]: time="2025-10-30T13:02:10.198944195Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:10.200749 containerd[1567]: time="2025-10-30T13:02:10.200702155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:10.201212 containerd[1567]: time="2025-10-30T13:02:10.201187515Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.24330808s" Oct 30 13:02:10.201256 containerd[1567]: time="2025-10-30T13:02:10.201218075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 30 13:02:10.201803 containerd[1567]: time="2025-10-30T13:02:10.201777755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 30 13:02:10.318295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 13:02:10.319978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:02:10.446056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:10.450261 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:02:10.483275 kubelet[2099]: E1030 13:02:10.483236 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:02:10.486449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:02:10.486594 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:02:10.489005 systemd[1]: kubelet.service: Consumed 140ms CPU time, 111.4M memory peak. Oct 30 13:02:10.873622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520136695.mount: Deactivated successfully. 
Oct 30 13:02:11.800971 containerd[1567]: time="2025-10-30T13:02:11.800912955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:11.801772 containerd[1567]: time="2025-10-30T13:02:11.801737995Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 30 13:02:11.802947 containerd[1567]: time="2025-10-30T13:02:11.802805835Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:11.806403 containerd[1567]: time="2025-10-30T13:02:11.806368115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:11.808051 containerd[1567]: time="2025-10-30T13:02:11.808020155Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.60616428s" Oct 30 13:02:11.808051 containerd[1567]: time="2025-10-30T13:02:11.808048875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 30 13:02:11.808503 containerd[1567]: time="2025-10-30T13:02:11.808472275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 30 13:02:12.253823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898451947.mount: Deactivated successfully. 
Oct 30 13:02:12.259811 containerd[1567]: time="2025-10-30T13:02:12.258962155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:12.259811 containerd[1567]: time="2025-10-30T13:02:12.259449315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 30 13:02:12.260371 containerd[1567]: time="2025-10-30T13:02:12.260353075Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:12.262054 containerd[1567]: time="2025-10-30T13:02:12.262031875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:12.262856 containerd[1567]: time="2025-10-30T13:02:12.262619155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 454.104ms" Oct 30 13:02:12.262856 containerd[1567]: time="2025-10-30T13:02:12.262650315Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 30 13:02:12.263094 containerd[1567]: time="2025-10-30T13:02:12.263056995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 30 13:02:15.456995 containerd[1567]: time="2025-10-30T13:02:15.456952915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:15.457581 containerd[1567]: time="2025-10-30T13:02:15.457552275Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 30 13:02:15.458532 containerd[1567]: time="2025-10-30T13:02:15.458500395Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:15.461726 containerd[1567]: time="2025-10-30T13:02:15.461686075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:15.462784 containerd[1567]: time="2025-10-30T13:02:15.462707355Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.19958388s" Oct 30 13:02:15.462784 containerd[1567]: time="2025-10-30T13:02:15.462749555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 30 13:02:20.696509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 13:02:20.699373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 30 13:02:20.814502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:20.817875 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 13:02:20.852337 kubelet[2236]: E1030 13:02:20.852287 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 13:02:20.854655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 13:02:20.854879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 13:02:20.855502 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106M memory peak. Oct 30 13:02:21.563494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:21.563762 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106M memory peak. Oct 30 13:02:21.565561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:02:21.587043 systemd[1]: Reload requested from client PID 2251 ('systemctl') (unit session-7.scope)... Oct 30 13:02:21.587063 systemd[1]: Reloading... Oct 30 13:02:21.661961 zram_generator::config[2298]: No configuration found. Oct 30 13:02:21.918156 systemd[1]: Reloading finished in 330 ms. Oct 30 13:02:21.965318 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 13:02:21.965388 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 13:02:21.965617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:21.965659 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95.1M memory peak. Oct 30 13:02:21.966903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:02:22.079010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:22.082725 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:02:22.117848 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 13:02:22.117848 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:02:22.118428 kubelet[2340]: I1030 13:02:22.118374 2340 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:02:22.771191 kubelet[2340]: I1030 13:02:22.771072 2340 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 13:02:22.771191 kubelet[2340]: I1030 13:02:22.771099 2340 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:02:22.772288 kubelet[2340]: I1030 13:02:22.772260 2340 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 13:02:22.772452 kubelet[2340]: I1030 13:02:22.772425 2340 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 13:02:22.772719 kubelet[2340]: I1030 13:02:22.772692 2340 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 13:02:22.863538 kubelet[2340]: E1030 13:02:22.863491 2340 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 13:02:22.864954 kubelet[2340]: I1030 13:02:22.864386 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:02:22.867722 kubelet[2340]: I1030 13:02:22.867700 2340 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:02:22.870121 kubelet[2340]: I1030 13:02:22.870104 2340 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 30 13:02:22.870365 kubelet[2340]: I1030 13:02:22.870339 2340 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:02:22.870515 kubelet[2340]: I1030 13:02:22.870366 2340 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:02:22.870600 kubelet[2340]: I1030 13:02:22.870517 2340 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 13:02:22.870600 kubelet[2340]: I1030 13:02:22.870527 2340 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 13:02:22.870639 kubelet[2340]: I1030 13:02:22.870624 2340 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 13:02:22.872997 kubelet[2340]: I1030 13:02:22.872979 2340 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:02:22.874123 kubelet[2340]: I1030 13:02:22.874091 2340 kubelet.go:475] "Attempting to sync node with API server" Oct 30 
13:02:22.874123 kubelet[2340]: I1030 13:02:22.874116 2340 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:02:22.874956 kubelet[2340]: I1030 13:02:22.874577 2340 kubelet.go:387] "Adding apiserver pod source" Oct 30 13:02:22.874956 kubelet[2340]: I1030 13:02:22.874610 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:02:22.874956 kubelet[2340]: E1030 13:02:22.874638 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 13:02:22.875171 kubelet[2340]: E1030 13:02:22.875129 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 13:02:22.875940 kubelet[2340]: I1030 13:02:22.875558 2340 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:02:22.876948 kubelet[2340]: I1030 13:02:22.876298 2340 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 13:02:22.876948 kubelet[2340]: I1030 13:02:22.876389 2340 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 13:02:22.877437 kubelet[2340]: W1030 13:02:22.877365 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
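
Every "dial tcp 10.0.0.105:6443: connect: connection refused" above comes from the same condition: the kubelet has started before any kube-apiserver is listening on that address, which is expected on a control-plane node that brings up its apiserver as a static pod. A trivial sketch of the same probe (address taken from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the kubelet is retrying against.
    	conn, err := net.DialTimeout("tcp", "10.0.0.105:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable yet:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver is accepting connections")
    }
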
Oct 30 13:02:22.879671 kubelet[2340]: I1030 13:02:22.879511 2340 server.go:1262] "Started kubelet" Oct 30 13:02:22.880879 kubelet[2340]: I1030 13:02:22.880855 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:02:22.881740 kubelet[2340]: I1030 13:02:22.881706 2340 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:02:22.881886 kubelet[2340]: I1030 13:02:22.881849 2340 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:02:22.881942 kubelet[2340]: I1030 13:02:22.881917 2340 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 13:02:22.882213 kubelet[2340]: I1030 13:02:22.882196 2340 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:02:22.882487 kubelet[2340]: I1030 13:02:22.882469 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:02:22.882649 kubelet[2340]: I1030 13:02:22.882636 2340 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 13:02:22.882781 kubelet[2340]: E1030 13:02:22.882762 2340 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 13:02:22.883103 kubelet[2340]: I1030 13:02:22.882974 2340 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 13:02:22.883103 kubelet[2340]: I1030 13:02:22.883040 2340 reconciler.go:29] "Reconciler: start to sync state" Oct 30 13:02:22.886526 kubelet[2340]: E1030 13:02:22.885911 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 13:02:22.886526 kubelet[2340]: E1030 13:02:22.886000 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Oct 30 13:02:22.886526 kubelet[2340]: I1030 13:02:22.886241 2340 server.go:310] "Adding debug handlers to kubelet server" Oct 30 13:02:22.888644 kubelet[2340]: I1030 13:02:22.888620 2340 factory.go:223] Registration of the systemd container factory successfully Oct 30 13:02:22.888793 kubelet[2340]: I1030 13:02:22.888775 2340 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:02:22.888885 kubelet[2340]: E1030 13:02:22.887495 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18734673e931049b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 13:02:22.879483035 +0000 UTC m=+0.793288281,LastTimestamp:2025-10-30 13:02:22.879483035 +0000 UTC 
m=+0.793288281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 13:02:22.890214 kubelet[2340]: I1030 13:02:22.890191 2340 factory.go:223] Registration of the containerd container factory successfully Oct 30 13:02:22.890697 kubelet[2340]: E1030 13:02:22.890677 2340 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:02:22.900323 kubelet[2340]: I1030 13:02:22.900183 2340 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 30 13:02:22.901181 kubelet[2340]: I1030 13:02:22.901149 2340 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 30 13:02:22.901181 kubelet[2340]: I1030 13:02:22.901171 2340 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 13:02:22.901262 kubelet[2340]: I1030 13:02:22.901199 2340 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 13:02:22.901262 kubelet[2340]: E1030 13:02:22.901241 2340 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:02:22.904676 kubelet[2340]: E1030 13:02:22.904645 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 13:02:22.905276 kubelet[2340]: I1030 13:02:22.905055 2340 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:02:22.905276 kubelet[2340]: I1030 13:02:22.905071 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:02:22.905276 kubelet[2340]: I1030 13:02:22.905090 2340 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:02:22.906876 kubelet[2340]: I1030 13:02:22.906857 2340 policy_none.go:49] "None policy: Start" Oct 30 13:02:22.906977 kubelet[2340]: I1030 13:02:22.906966 2340 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 13:02:22.907031 kubelet[2340]: I1030 13:02:22.907019 2340 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 13:02:22.908315 kubelet[2340]: I1030 13:02:22.908298 2340 policy_none.go:47] "Start" Oct 30 13:02:22.912323 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 13:02:22.926373 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 13:02:22.929272 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
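
The kubepods slices systemd just created form the root of the pod cgroup hierarchy. With the systemd cgroup driver and cgroup v2 reported in the container manager config above ("CgroupDriver":"systemd", "CgroupVersion":2), systemd nests them by name under the unified mount. A small sketch that walks them (slice names from the log; /sys/fs/cgroup as the v2 mount point is an assumption about the host layout):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// systemd nests slices by name prefix, so kubepods-burstable.slice
    	// and kubepods-besteffort.slice live inside kubepods.slice.
    	roots := []string{
    		"/sys/fs/cgroup/kubepods.slice",
    		"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice",
    		"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice",
    	}
    	for _, dir := range roots {
    		ctrl, err := os.ReadFile(filepath.Join(dir, "cgroup.controllers"))
    		if err != nil {
    			fmt.Printf("%s: %v\n", dir, err)
    			continue
    		}
    		fmt.Printf("%s: controllers=%s", dir, ctrl)
    	}
    }
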
Oct 30 13:02:22.936847 kubelet[2340]: E1030 13:02:22.936821 2340 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 13:02:22.937088 kubelet[2340]: I1030 13:02:22.937060 2340 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:02:22.937126 kubelet[2340]: I1030 13:02:22.937077 2340 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:02:22.937396 kubelet[2340]: I1030 13:02:22.937375 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:02:22.938821 kubelet[2340]: E1030 13:02:22.938792 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 13:02:22.938874 kubelet[2340]: E1030 13:02:22.938835 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 13:02:23.010313 systemd[1]: Created slice kubepods-burstable-pod39752a95659548257133debf853154b8.slice - libcontainer container kubepods-burstable-pod39752a95659548257133debf853154b8.slice. Oct 30 13:02:23.023677 kubelet[2340]: E1030 13:02:23.023582 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:23.026598 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 30 13:02:23.028367 kubelet[2340]: E1030 13:02:23.028125 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:23.030122 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 30 13:02:23.033771 kubelet[2340]: E1030 13:02:23.033594 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:23.039957 kubelet[2340]: I1030 13:02:23.039936 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:02:23.040370 kubelet[2340]: E1030 13:02:23.040336 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 30 13:02:23.083676 kubelet[2340]: I1030 13:02:23.083637 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39752a95659548257133debf853154b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:23.083676 kubelet[2340]: I1030 13:02:23.083671 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39752a95659548257133debf853154b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:23.083742 kubelet[2340]: I1030 13:02:23.083689 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/39752a95659548257133debf853154b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:23.083742 kubelet[2340]: I1030 13:02:23.083704 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:23.083742 kubelet[2340]: I1030 13:02:23.083731 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:23.083816 kubelet[2340]: I1030 13:02:23.083745 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:23.083816 kubelet[2340]: I1030 13:02:23.083761 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:23.083816 kubelet[2340]: I1030 13:02:23.083774 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:23.083816 kubelet[2340]: I1030 13:02:23.083787 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:23.087138 kubelet[2340]: E1030 13:02:23.087101 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Oct 30 13:02:23.242591 kubelet[2340]: I1030 13:02:23.242515 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:02:23.243037 kubelet[2340]: E1030 13:02:23.243000 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 30 13:02:23.326208 kubelet[2340]: E1030 13:02:23.326118 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:23.327364 containerd[1567]: time="2025-10-30T13:02:23.327317755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39752a95659548257133debf853154b8,Namespace:kube-system,Attempt:0,}" Oct 30 13:02:23.330967 kubelet[2340]: E1030 13:02:23.330942 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:23.331364 containerd[1567]: time="2025-10-30T13:02:23.331333235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 30 13:02:23.335910 kubelet[2340]: E1030 13:02:23.335886 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:23.336265 containerd[1567]: time="2025-10-30T13:02:23.336240155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 30 13:02:23.488015 kubelet[2340]: E1030 13:02:23.487973 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Oct 30 13:02:23.644740 kubelet[2340]: I1030 13:02:23.644639 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:02:23.645248 kubelet[2340]: E1030 13:02:23.645220 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 30 13:02:23.886461 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1951072485.mount: Deactivated successfully. Oct 30 13:02:23.890211 kubelet[2340]: E1030 13:02:23.890169 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 13:02:23.891359 containerd[1567]: time="2025-10-30T13:02:23.891313915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:02:23.892623 containerd[1567]: time="2025-10-30T13:02:23.892586875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 30 13:02:23.895404 containerd[1567]: time="2025-10-30T13:02:23.895319835Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:02:23.896126 containerd[1567]: time="2025-10-30T13:02:23.896103315Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:02:23.897640 containerd[1567]: time="2025-10-30T13:02:23.897563955Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 13:02:23.898253 containerd[1567]: time="2025-10-30T13:02:23.898210715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:02:23.899374 containerd[1567]: time="2025-10-30T13:02:23.899343235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 13:02:23.899969 containerd[1567]: time="2025-10-30T13:02:23.899945275Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 13:02:23.901437 containerd[1567]: time="2025-10-30T13:02:23.901349155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 568.18744ms" Oct 30 13:02:23.904520 containerd[1567]: time="2025-10-30T13:02:23.904478875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 574.67928ms" Oct 30 13:02:23.905226 containerd[1567]: time="2025-10-30T13:02:23.905178475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 567.1754ms" Oct 30 13:02:23.918714 containerd[1567]: time="2025-10-30T13:02:23.918627835Z" level=info msg="connecting to shim be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209" address="unix:///run/containerd/s/d0319f9aa881f9d38ce69ee5ff2ef299b1ff15c050a145769ee6ded3497cdd38" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:23.929506 containerd[1567]: time="2025-10-30T13:02:23.929449915Z" level=info msg="connecting to shim 8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de" address="unix:///run/containerd/s/aac61c8612128bcbab7e037f8210ed096b3e76e4c499f752edd1496f62c8b4ef" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:23.932974 containerd[1567]: time="2025-10-30T13:02:23.932933115Z" level=info msg="connecting to shim 4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446" address="unix:///run/containerd/s/56aed11b8cad46f720af83fd00c9f122d4f8fe71024244414daab4f713281b02" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:23.949118 systemd[1]: Started cri-containerd-be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209.scope - libcontainer container be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209. Oct 30 13:02:23.953255 systemd[1]: Started cri-containerd-4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446.scope - libcontainer container 4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446. Oct 30 13:02:23.959628 systemd[1]: Started cri-containerd-8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de.scope - libcontainer container 8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de. 
Oct 30 13:02:23.992952 containerd[1567]: time="2025-10-30T13:02:23.992882195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446\"" Oct 30 13:02:23.994196 kubelet[2340]: E1030 13:02:23.994156 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:23.996645 containerd[1567]: time="2025-10-30T13:02:23.996601675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209\"" Oct 30 13:02:23.997217 kubelet[2340]: E1030 13:02:23.997187 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:23.999526 containerd[1567]: time="2025-10-30T13:02:23.999488995Z" level=info msg="CreateContainer within sandbox \"4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 13:02:23.999905 containerd[1567]: time="2025-10-30T13:02:23.999870195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:39752a95659548257133debf853154b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de\"" Oct 30 13:02:24.000682 kubelet[2340]: E1030 13:02:24.000662 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:24.000793 containerd[1567]: time="2025-10-30T13:02:24.000655355Z" level=info msg="CreateContainer within sandbox \"be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 13:02:24.004006 containerd[1567]: time="2025-10-30T13:02:24.003557595Z" level=info msg="CreateContainer within sandbox \"8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 13:02:24.008873 containerd[1567]: time="2025-10-30T13:02:24.008839435Z" level=info msg="Container 05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:24.011138 containerd[1567]: time="2025-10-30T13:02:24.010986315Z" level=info msg="Container c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:24.012775 containerd[1567]: time="2025-10-30T13:02:24.012743515Z" level=info msg="Container 8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:24.017125 containerd[1567]: time="2025-10-30T13:02:24.017078355Z" level=info msg="CreateContainer within sandbox \"4f218c9771a009afb44c0457223d9452cecf2a7ca5d1937f290cf80f6a9f3446\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb\"" Oct 30 13:02:24.017980 containerd[1567]: time="2025-10-30T13:02:24.017952435Z" 
level=info msg="StartContainer for \"05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb\"" Oct 30 13:02:24.019257 containerd[1567]: time="2025-10-30T13:02:24.019227195Z" level=info msg="connecting to shim 05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb" address="unix:///run/containerd/s/56aed11b8cad46f720af83fd00c9f122d4f8fe71024244414daab4f713281b02" protocol=ttrpc version=3 Oct 30 13:02:24.021366 containerd[1567]: time="2025-10-30T13:02:24.021285195Z" level=info msg="CreateContainer within sandbox \"8bdc68c19a89be90799024afee53312f4aa2b1137fea00ca4ef3beff38e6c1de\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0\"" Oct 30 13:02:24.021425 containerd[1567]: time="2025-10-30T13:02:24.021405595Z" level=info msg="CreateContainer within sandbox \"be894c7fd0fb9c4378452a82662e292bb2f18401ae23674944bb628b6376b209\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a\"" Oct 30 13:02:24.022299 containerd[1567]: time="2025-10-30T13:02:24.022273915Z" level=info msg="StartContainer for \"c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a\"" Oct 30 13:02:24.023288 containerd[1567]: time="2025-10-30T13:02:24.023264955Z" level=info msg="connecting to shim c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a" address="unix:///run/containerd/s/d0319f9aa881f9d38ce69ee5ff2ef299b1ff15c050a145769ee6ded3497cdd38" protocol=ttrpc version=3 Oct 30 13:02:24.024945 containerd[1567]: time="2025-10-30T13:02:24.023892955Z" level=info msg="StartContainer for \"8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0\"" Oct 30 13:02:24.024945 containerd[1567]: time="2025-10-30T13:02:24.024829115Z" level=info msg="connecting to shim 8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0" address="unix:///run/containerd/s/aac61c8612128bcbab7e037f8210ed096b3e76e4c499f752edd1496f62c8b4ef" protocol=ttrpc version=3 Oct 30 13:02:24.041092 systemd[1]: Started cri-containerd-05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb.scope - libcontainer container 05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb. Oct 30 13:02:24.045574 systemd[1]: Started cri-containerd-8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0.scope - libcontainer container 8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0. Oct 30 13:02:24.047026 systemd[1]: Started cri-containerd-c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a.scope - libcontainer container c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a. 
Oct 30 13:02:24.094980 containerd[1567]: time="2025-10-30T13:02:24.092571435Z" level=info msg="StartContainer for \"c8818ae90cdc329bd7f0ad7e4995ca08a48654db80959292560c773f82d4975a\" returns successfully" Oct 30 13:02:24.095177 containerd[1567]: time="2025-10-30T13:02:24.095149915Z" level=info msg="StartContainer for \"05fd1eadd4ac5e1beab9942a929bf55a5c6a824c1e0fd3bf7f21e249f5265cdb\" returns successfully" Oct 30 13:02:24.101253 containerd[1567]: time="2025-10-30T13:02:24.101132355Z" level=info msg="StartContainer for \"8abf4f23b8a0eb8fb152dcf70758bbbd2cbd92020ca933941df27fd3a25059f0\" returns successfully" Oct 30 13:02:24.122740 kubelet[2340]: E1030 13:02:24.122703 2340 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 13:02:24.447211 kubelet[2340]: I1030 13:02:24.447175 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:02:24.915722 kubelet[2340]: E1030 13:02:24.915634 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:24.915942 kubelet[2340]: E1030 13:02:24.915908 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:24.917210 kubelet[2340]: E1030 13:02:24.917191 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:24.917322 kubelet[2340]: E1030 13:02:24.917307 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:24.919938 kubelet[2340]: E1030 13:02:24.919493 2340 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 13:02:24.919938 kubelet[2340]: E1030 13:02:24.919599 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:25.692272 kubelet[2340]: E1030 13:02:25.692236 2340 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 13:02:25.747203 kubelet[2340]: I1030 13:02:25.747082 2340 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 13:02:25.747203 kubelet[2340]: E1030 13:02:25.747116 2340 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 13:02:25.783991 kubelet[2340]: I1030 13:02:25.783945 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:25.789088 kubelet[2340]: E1030 13:02:25.789051 2340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:25.789088 kubelet[2340]: I1030 
13:02:25.789079 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:25.790974 kubelet[2340]: E1030 13:02:25.790831 2340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:25.790974 kubelet[2340]: I1030 13:02:25.790856 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:25.792719 kubelet[2340]: E1030 13:02:25.792685 2340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:25.875843 kubelet[2340]: I1030 13:02:25.875802 2340 apiserver.go:52] "Watching apiserver" Oct 30 13:02:25.883145 kubelet[2340]: I1030 13:02:25.883098 2340 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 13:02:25.920995 kubelet[2340]: I1030 13:02:25.920957 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:25.921465 kubelet[2340]: I1030 13:02:25.921326 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:25.923278 kubelet[2340]: E1030 13:02:25.923224 2340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:25.923482 kubelet[2340]: E1030 13:02:25.923450 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:25.923992 kubelet[2340]: E1030 13:02:25.923970 2340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:25.924097 kubelet[2340]: E1030 13:02:25.924081 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:27.383132 kubelet[2340]: I1030 13:02:27.383095 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:27.388094 kubelet[2340]: E1030 13:02:27.388073 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:27.520777 systemd[1]: Reload requested from client PID 2628 ('systemctl') (unit session-7.scope)... Oct 30 13:02:27.520793 systemd[1]: Reloading... Oct 30 13:02:27.569386 kubelet[2340]: I1030 13:02:27.569352 2340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:27.574522 kubelet[2340]: E1030 13:02:27.574501 2340 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:27.588001 zram_generator::config[2673]: No configuration found. Oct 30 13:02:27.755805 systemd[1]: Reloading finished in 234 ms. 
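The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors are a startup ordering issue rather than a persistent fault: the static control-plane pods from /etc/kubernetes/manifests request the system-node-critical priority class, which the API server only creates once its own bootstrapping has run, so the kubelet's first attempts to publish mirror pods are rejected and retried; the "already exists" errors further down show the mirrors were eventually created. A schematic retry loop, not the kubelet's actual code, with a hypothetical createMirrorPod helper standing in for the API call:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errNoPriorityClass stands in for the admission error seen above.
    var errNoPriorityClass = errors.New(
        "no PriorityClass with name system-node-critical was found")

    // createMirrorPod is a hypothetical stand-in for the kubelet's API call that
    // mirrors a static pod into the cluster; it fails until the API server has
    // created its built-in priority classes.
    func createMirrorPod(name string, apiBootstrapped bool) error {
        if !apiBootstrapped {
            return errNoPriorityClass
        }
        return nil
    }

    func main() {
        bootstrapped := false
        for attempt := 1; attempt <= 3; attempt++ {
            if err := createMirrorPod("kube-controller-manager-localhost", bootstrapped); err != nil {
                fmt.Printf("attempt %d: Failed creating a mirror pod: %v\n", attempt, err)
                bootstrapped = true // simulate the built-in classes appearing
                time.Sleep(100 * time.Millisecond)
                continue
            }
            fmt.Printf("attempt %d: mirror pod created\n", attempt)
            break
        }
    }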
Oct 30 13:02:27.782333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:02:27.782464 kubelet[2340]: I1030 13:02:27.782239 2340 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:02:27.801670 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 13:02:27.801899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:27.801973 systemd[1]: kubelet.service: Consumed 1.050s CPU time, 121.8M memory peak. Oct 30 13:02:27.803521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 13:02:27.972011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 13:02:27.989193 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 13:02:28.023362 kubelet[2714]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 13:02:28.023362 kubelet[2714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 13:02:28.023624 kubelet[2714]: I1030 13:02:28.023355 2714 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 13:02:28.030612 kubelet[2714]: I1030 13:02:28.030571 2714 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 13:02:28.030612 kubelet[2714]: I1030 13:02:28.030596 2714 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 13:02:28.030707 kubelet[2714]: I1030 13:02:28.030623 2714 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 13:02:28.030707 kubelet[2714]: I1030 13:02:28.030630 2714 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 13:02:28.030825 kubelet[2714]: I1030 13:02:28.030795 2714 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 13:02:28.031914 kubelet[2714]: I1030 13:02:28.031888 2714 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 13:02:28.033919 kubelet[2714]: I1030 13:02:28.033891 2714 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 13:02:28.036514 kubelet[2714]: I1030 13:02:28.036496 2714 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 13:02:28.038849 kubelet[2714]: I1030 13:02:28.038822 2714 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 30 13:02:28.039067 kubelet[2714]: I1030 13:02:28.039037 2714 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 13:02:28.039186 kubelet[2714]: I1030 13:02:28.039059 2714 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 13:02:28.039252 kubelet[2714]: I1030 13:02:28.039189 2714 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 13:02:28.039252 kubelet[2714]: I1030 13:02:28.039198 2714 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 13:02:28.039252 kubelet[2714]: I1030 13:02:28.039222 2714 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 13:02:28.040020 kubelet[2714]: I1030 13:02:28.040005 2714 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:02:28.040146 kubelet[2714]: I1030 13:02:28.040136 2714 kubelet.go:475] "Attempting to sync node with API server" Oct 30 13:02:28.040175 kubelet[2714]: I1030 13:02:28.040148 2714 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 13:02:28.040175 kubelet[2714]: I1030 13:02:28.040167 2714 kubelet.go:387] "Adding apiserver pod source" Oct 30 13:02:28.040225 kubelet[2714]: I1030 13:02:28.040176 2714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 13:02:28.040944 kubelet[2714]: I1030 13:02:28.040915 2714 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 13:02:28.041894 kubelet[2714]: I1030 13:02:28.041862 2714 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 13:02:28.041950 kubelet[2714]: I1030 13:02:28.041907 2714 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 13:02:28.048587 
kubelet[2714]: I1030 13:02:28.048572 2714 server.go:1262] "Started kubelet" Oct 30 13:02:28.049230 kubelet[2714]: I1030 13:02:28.049213 2714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 13:02:28.050933 kubelet[2714]: I1030 13:02:28.049471 2714 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 13:02:28.050933 kubelet[2714]: I1030 13:02:28.050178 2714 server.go:310] "Adding debug handlers to kubelet server" Oct 30 13:02:28.052632 kubelet[2714]: I1030 13:02:28.052589 2714 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 13:02:28.052696 kubelet[2714]: I1030 13:02:28.052644 2714 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 13:02:28.053951 kubelet[2714]: I1030 13:02:28.052766 2714 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 13:02:28.053951 kubelet[2714]: I1030 13:02:28.052977 2714 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 13:02:28.054373 kubelet[2714]: I1030 13:02:28.054352 2714 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 13:02:28.054517 kubelet[2714]: I1030 13:02:28.054446 2714 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 13:02:28.055687 kubelet[2714]: I1030 13:02:28.054550 2714 reconciler.go:29] "Reconciler: start to sync state" Oct 30 13:02:28.061123 kubelet[2714]: I1030 13:02:28.060977 2714 factory.go:223] Registration of the systemd container factory successfully Oct 30 13:02:28.061123 kubelet[2714]: I1030 13:02:28.061060 2714 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 13:02:28.062866 kubelet[2714]: I1030 13:02:28.062837 2714 factory.go:223] Registration of the containerd container factory successfully Oct 30 13:02:28.063632 kubelet[2714]: E1030 13:02:28.063610 2714 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 13:02:28.069374 kubelet[2714]: I1030 13:02:28.069259 2714 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 30 13:02:28.070144 kubelet[2714]: I1030 13:02:28.070128 2714 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 30 13:02:28.070204 kubelet[2714]: I1030 13:02:28.070195 2714 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 13:02:28.070260 kubelet[2714]: I1030 13:02:28.070251 2714 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 13:02:28.070359 kubelet[2714]: E1030 13:02:28.070339 2714 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 13:02:28.098256 kubelet[2714]: I1030 13:02:28.098236 2714 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 13:02:28.098945 kubelet[2714]: I1030 13:02:28.098520 2714 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 13:02:28.099010 kubelet[2714]: I1030 13:02:28.098966 2714 state_mem.go:36] "Initialized new in-memory state store" Oct 30 13:02:28.099104 kubelet[2714]: I1030 13:02:28.099085 2714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 13:02:28.099126 kubelet[2714]: I1030 13:02:28.099104 2714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 13:02:28.099126 kubelet[2714]: I1030 13:02:28.099121 2714 policy_none.go:49] "None policy: Start" Oct 30 13:02:28.099166 kubelet[2714]: I1030 13:02:28.099128 2714 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 13:02:28.099166 kubelet[2714]: I1030 13:02:28.099139 2714 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 13:02:28.099239 kubelet[2714]: I1030 13:02:28.099228 2714 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 30 13:02:28.099262 kubelet[2714]: I1030 13:02:28.099240 2714 policy_none.go:47] "Start" Oct 30 13:02:28.103963 kubelet[2714]: E1030 13:02:28.103555 2714 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 13:02:28.104257 kubelet[2714]: I1030 13:02:28.104237 2714 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 13:02:28.104292 kubelet[2714]: I1030 13:02:28.104269 2714 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 13:02:28.104648 kubelet[2714]: I1030 13:02:28.104601 2714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 13:02:28.106199 kubelet[2714]: E1030 13:02:28.106179 2714 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 30 13:02:28.172361 kubelet[2714]: I1030 13:02:28.172303 2714 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:28.172445 kubelet[2714]: I1030 13:02:28.172411 2714 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:28.173483 kubelet[2714]: I1030 13:02:28.173446 2714 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.178421 kubelet[2714]: E1030 13:02:28.178393 2714 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.179220 kubelet[2714]: E1030 13:02:28.179200 2714 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:28.213849 kubelet[2714]: I1030 13:02:28.213829 2714 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 13:02:28.219780 kubelet[2714]: I1030 13:02:28.219756 2714 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 30 13:02:28.219867 kubelet[2714]: I1030 13:02:28.219817 2714 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 13:02:28.356160 kubelet[2714]: I1030 13:02:28.356036 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39752a95659548257133debf853154b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:28.356404 kubelet[2714]: I1030 13:02:28.356356 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/39752a95659548257133debf853154b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:28.356559 kubelet[2714]: I1030 13:02:28.356531 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.356603 kubelet[2714]: I1030 13:02:28.356575 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.356649 kubelet[2714]: I1030 13:02:28.356610 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 13:02:28.356649 kubelet[2714]: I1030 13:02:28.356635 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/39752a95659548257133debf853154b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"39752a95659548257133debf853154b8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:28.356712 kubelet[2714]: I1030 13:02:28.356662 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.356712 kubelet[2714]: I1030 13:02:28.356684 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.356769 kubelet[2714]: I1030 13:02:28.356714 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 13:02:28.478323 kubelet[2714]: E1030 13:02:28.478189 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:28.479115 kubelet[2714]: E1030 13:02:28.478980 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:28.480062 kubelet[2714]: E1030 13:02:28.480003 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:29.042940 kubelet[2714]: I1030 13:02:29.042808 2714 apiserver.go:52] "Watching apiserver" Oct 30 13:02:29.054715 kubelet[2714]: I1030 13:02:29.054677 2714 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 13:02:29.084955 kubelet[2714]: E1030 13:02:29.084765 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:29.084955 kubelet[2714]: I1030 13:02:29.084834 2714 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:29.087196 kubelet[2714]: E1030 13:02:29.087125 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:29.090296 kubelet[2714]: E1030 13:02:29.090269 2714 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 13:02:29.090433 kubelet[2714]: E1030 13:02:29.090415 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:29.106819 kubelet[2714]: I1030 
13:02:29.106771 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.106759315 podStartE2EDuration="1.106759315s" podCreationTimestamp="2025-10-30 13:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:02:29.106384795 +0000 UTC m=+1.114321481" watchObservedRunningTime="2025-10-30 13:02:29.106759315 +0000 UTC m=+1.114695921" Oct 30 13:02:29.119592 kubelet[2714]: I1030 13:02:29.119552 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.119541275 podStartE2EDuration="2.119541275s" podCreationTimestamp="2025-10-30 13:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:02:29.113000475 +0000 UTC m=+1.120937081" watchObservedRunningTime="2025-10-30 13:02:29.119541275 +0000 UTC m=+1.127477841" Oct 30 13:02:30.085621 kubelet[2714]: E1030 13:02:30.085572 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:30.086503 kubelet[2714]: E1030 13:02:30.085729 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:31.087058 kubelet[2714]: E1030 13:02:31.086989 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:32.863472 kubelet[2714]: E1030 13:02:32.863416 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:34.182746 kubelet[2714]: I1030 13:02:34.182685 2714 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 13:02:34.183084 containerd[1567]: time="2025-10-30T13:02:34.183007000Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 13:02:34.183255 kubelet[2714]: I1030 13:02:34.183157 2714 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 13:02:35.019179 kubelet[2714]: I1030 13:02:35.019123 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.019105978 podStartE2EDuration="8.019105978s" podCreationTimestamp="2025-10-30 13:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:02:29.119516515 +0000 UTC m=+1.127453081" watchObservedRunningTime="2025-10-30 13:02:35.019105978 +0000 UTC m=+7.027042584" Oct 30 13:02:35.032663 systemd[1]: Created slice kubepods-besteffort-pod1231f10a_e8c1_4a49_825b_349d62d25c41.slice - libcontainer container kubepods-besteffort-pod1231f10a_e8c1_4a49_825b_349d62d25c41.slice. 
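The pod_startup_latency_tracker entries can be checked against the timestamps they carry: for kube-scheduler-localhost the reported podStartE2EDuration of 1.106759315s is exactly watchObservedRunningTime (13:02:29.106759315) minus podCreationTimestamp (13:02:28), and the "m=+1.114..." suffixes are Go's monotonic clock readings printed alongside the wall clock, not part of the duration. A quick reproduction of that arithmetic with the standard library, using the same timestamp layout the kubelet prints:

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the timestamps printed in the kubelet log,
    // e.g. "2025-10-30 13:02:29.106759315 +0000 UTC".
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
        created, err := time.Parse(layout, "2025-10-30 13:02:28 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-10-30 13:02:29.106759315 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1.106759315s, the podStartE2EDuration reported for
        // kube-system/kube-scheduler-localhost above.
        fmt.Println(observed.Sub(created))
    }

The zero-value firstStartedPulling/lastFinishedPulling timestamps ("0001-01-01 00:00:00") simply indicate that no image pull was recorded for these pods.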
Oct 30 13:02:35.208531 kubelet[2714]: I1030 13:02:35.208500 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkfj\" (UniqueName: \"kubernetes.io/projected/1231f10a-e8c1-4a49-825b-349d62d25c41-kube-api-access-xnkfj\") pod \"kube-proxy-vkjb4\" (UID: \"1231f10a-e8c1-4a49-825b-349d62d25c41\") " pod="kube-system/kube-proxy-vkjb4" Oct 30 13:02:35.208531 kubelet[2714]: I1030 13:02:35.208536 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1231f10a-e8c1-4a49-825b-349d62d25c41-xtables-lock\") pod \"kube-proxy-vkjb4\" (UID: \"1231f10a-e8c1-4a49-825b-349d62d25c41\") " pod="kube-system/kube-proxy-vkjb4" Oct 30 13:02:35.208866 kubelet[2714]: I1030 13:02:35.208558 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1231f10a-e8c1-4a49-825b-349d62d25c41-kube-proxy\") pod \"kube-proxy-vkjb4\" (UID: \"1231f10a-e8c1-4a49-825b-349d62d25c41\") " pod="kube-system/kube-proxy-vkjb4" Oct 30 13:02:35.208866 kubelet[2714]: I1030 13:02:35.208573 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1231f10a-e8c1-4a49-825b-349d62d25c41-lib-modules\") pod \"kube-proxy-vkjb4\" (UID: \"1231f10a-e8c1-4a49-825b-349d62d25c41\") " pod="kube-system/kube-proxy-vkjb4" Oct 30 13:02:35.344883 kubelet[2714]: E1030 13:02:35.344797 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:35.345582 containerd[1567]: time="2025-10-30T13:02:35.345358227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vkjb4,Uid:1231f10a-e8c1-4a49-825b-349d62d25c41,Namespace:kube-system,Attempt:0,}" Oct 30 13:02:35.364697 containerd[1567]: time="2025-10-30T13:02:35.364384926Z" level=info msg="connecting to shim 04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62" address="unix:///run/containerd/s/86951b6e71b707ba63b10d163f8f191bfd6372bc9b86e53cd66495482d110f1a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:35.391099 systemd[1]: Started cri-containerd-04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62.scope - libcontainer container 04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62. 
Oct 30 13:02:35.426689 containerd[1567]: time="2025-10-30T13:02:35.426638635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vkjb4,Uid:1231f10a-e8c1-4a49-825b-349d62d25c41,Namespace:kube-system,Attempt:0,} returns sandbox id \"04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62\"" Oct 30 13:02:35.427859 kubelet[2714]: E1030 13:02:35.427823 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:35.436729 containerd[1567]: time="2025-10-30T13:02:35.436682853Z" level=info msg="CreateContainer within sandbox \"04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 13:02:35.442799 systemd[1]: Created slice kubepods-besteffort-podd3c5f4d6_a3e1_4055_9949_ec49ecceb1f1.slice - libcontainer container kubepods-besteffort-podd3c5f4d6_a3e1_4055_9949_ec49ecceb1f1.slice. Oct 30 13:02:35.457978 containerd[1567]: time="2025-10-30T13:02:35.457075897Z" level=info msg="Container 1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:35.471085 containerd[1567]: time="2025-10-30T13:02:35.471032345Z" level=info msg="CreateContainer within sandbox \"04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9\"" Oct 30 13:02:35.473315 containerd[1567]: time="2025-10-30T13:02:35.473093582Z" level=info msg="StartContainer for \"1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9\"" Oct 30 13:02:35.474594 containerd[1567]: time="2025-10-30T13:02:35.474570168Z" level=info msg="connecting to shim 1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9" address="unix:///run/containerd/s/86951b6e71b707ba63b10d163f8f191bfd6372bc9b86e53cd66495482d110f1a" protocol=ttrpc version=3 Oct 30 13:02:35.496080 systemd[1]: Started cri-containerd-1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9.scope - libcontainer container 1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9. 
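containerd logs a "connecting to shim" line once for the pod sandbox and again for each container launched inside it, and the container reuses the sandbox's shim socket: the kube-proxy container 1d5f2323... above connects to the same /run/containerd/s/86951b6e... address as its sandbox 04c51ac2.... A small helper (a reading aid that assumes only this journal's line format, not a containerd API) that groups shim IDs by socket so sandboxes and their containers can be matched:

    package main

    import (
        "fmt"
        "regexp"
    )

    // shimRe pulls the shim id and socket address out of the
    // `connecting to shim ...` lines as they appear in this journal.
    var shimRe = regexp.MustCompile(`connecting to shim ([0-9a-f]+)" address="(unix://[^"]+)"`)

    func main() {
        lines := []string{
            `msg="connecting to shim 04c51ac256e4f386f9479af386b7dfb309ec3ef186b4a424cd03055f1bae8d62" address="unix:///run/containerd/s/86951b6e71b707ba63b10d163f8f191bfd6372bc9b86e53cd66495482d110f1a" namespace=k8s.io protocol=ttrpc version=3`,
            `msg="connecting to shim 1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9" address="unix:///run/containerd/s/86951b6e71b707ba63b10d163f8f191bfd6372bc9b86e53cd66495482d110f1a" protocol=ttrpc version=3`,
        }

        // Group shim ids (the sandbox first, then its containers) by socket.
        bySocket := map[string][]string{}
        for _, l := range lines {
            if m := shimRe.FindStringSubmatch(l); m != nil {
                bySocket[m[2]] = append(bySocket[m[2]], m[1])
            }
        }
        for sock, ids := range bySocket {
            fmt.Println(sock)
            for _, id := range ids {
                fmt.Println("   ", id)
            }
        }
    }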
Oct 30 13:02:35.509891 kubelet[2714]: I1030 13:02:35.509861 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-l9rj2\" (UID: \"d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-l9rj2" Oct 30 13:02:35.509968 kubelet[2714]: I1030 13:02:35.509902 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ldd\" (UniqueName: \"kubernetes.io/projected/d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1-kube-api-access-p8ldd\") pod \"tigera-operator-65cdcdfd6d-l9rj2\" (UID: \"d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-l9rj2" Oct 30 13:02:35.526956 containerd[1567]: time="2025-10-30T13:02:35.526915860Z" level=info msg="StartContainer for \"1d5f23232331f473f7431ab67f649a795972c86ff88823f0e1a5fad890b5cfc9\" returns successfully" Oct 30 13:02:35.748571 containerd[1567]: time="2025-10-30T13:02:35.748523406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-l9rj2,Uid:d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1,Namespace:tigera-operator,Attempt:0,}" Oct 30 13:02:35.769074 containerd[1567]: time="2025-10-30T13:02:35.769033291Z" level=info msg="connecting to shim aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4" address="unix:///run/containerd/s/5729406c1ac59550a9186cf9fd9dfb8cb8be26d8aa8188ca70394bafe79e0e0a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:35.791087 systemd[1]: Started cri-containerd-aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4.scope - libcontainer container aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4. 
Oct 30 13:02:35.818140 containerd[1567]: time="2025-10-30T13:02:35.818091125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-l9rj2,Uid:d3c5f4d6-a3e1-4055-9949-ec49ecceb1f1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4\"" Oct 30 13:02:35.819605 containerd[1567]: time="2025-10-30T13:02:35.819581951Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 13:02:35.842082 kubelet[2714]: E1030 13:02:35.842039 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:35.971978 kubelet[2714]: E1030 13:02:35.971725 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:36.100654 kubelet[2714]: E1030 13:02:36.100579 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:36.103677 kubelet[2714]: E1030 13:02:36.103128 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:36.103677 kubelet[2714]: E1030 13:02:36.103394 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:36.123084 kubelet[2714]: I1030 13:02:36.123040 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vkjb4" podStartSLOduration=1.123027698 podStartE2EDuration="1.123027698s" podCreationTimestamp="2025-10-30 13:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:02:36.111713549 +0000 UTC m=+8.119650235" watchObservedRunningTime="2025-10-30 13:02:36.123027698 +0000 UTC m=+8.130964304" Oct 30 13:02:37.106796 kubelet[2714]: E1030 13:02:37.106769 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:37.284524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333375508.mount: Deactivated successfully. 
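Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount333375508.mount are systemd's path escaping at work: the leading slash of the mount point (here a temporary mount under /var/lib/containerd/tmpmounts/) is dropped, the remaining slashes become dashes, and bytes that would be ambiguous, like the literal dash inside "containerd-mount...", are hex-escaped as \x2d. A simplified version of that escaping; the complete rules (leading dots, the root path, unescaping) are in systemd-escape(1) and systemd.unit(5):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath applies a simplified form of systemd's path escaping:
    // strip the leading "/", escape bytes that are not alphanumeric, ".",
    // "_" or ":" as \xNN, and turn the "/" separators into "-".
    func escapePath(path string) string {
        var b strings.Builder
        for _, part := range strings.Split(strings.Trim(path, "/"), "/") {
            if b.Len() > 0 {
                b.WriteByte('-')
            }
            for i := 0; i < len(part); i++ {
                c := part[i]
                switch {
                case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                    c >= '0' && c <= '9', c == '.', c == '_', c == ':':
                    b.WriteByte(c)
                default:
                    fmt.Fprintf(&b, `\x%02x`, c)
                }
            }
        }
        return b.String()
    }

    func main() {
        // Prints var-lib-containerd-tmpmounts-containerd\x2dmount333375508.mount,
        // matching the unit name seen above.
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount333375508") + ".mount")
    }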
Oct 30 13:02:37.655675 containerd[1567]: time="2025-10-30T13:02:37.655617198Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:37.656316 containerd[1567]: time="2025-10-30T13:02:37.656277448Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 30 13:02:37.656988 containerd[1567]: time="2025-10-30T13:02:37.656967699Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:37.658949 containerd[1567]: time="2025-10-30T13:02:37.658911689Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:37.659719 containerd[1567]: time="2025-10-30T13:02:37.659679821Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.84007283s" Oct 30 13:02:37.659752 containerd[1567]: time="2025-10-30T13:02:37.659716582Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 30 13:02:37.678191 containerd[1567]: time="2025-10-30T13:02:37.678164230Z" level=info msg="CreateContainer within sandbox \"aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 13:02:37.683518 containerd[1567]: time="2025-10-30T13:02:37.683000586Z" level=info msg="Container de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:37.688875 containerd[1567]: time="2025-10-30T13:02:37.688848318Z" level=info msg="CreateContainer within sandbox \"aba0e2fc7b7705eb1b58220f40c81c86b01ae638936d08fe5a4148b9ba57eee4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b\"" Oct 30 13:02:37.689571 containerd[1567]: time="2025-10-30T13:02:37.689358526Z" level=info msg="StartContainer for \"de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b\"" Oct 30 13:02:37.690198 containerd[1567]: time="2025-10-30T13:02:37.690157658Z" level=info msg="connecting to shim de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b" address="unix:///run/containerd/s/5729406c1ac59550a9186cf9fd9dfb8cb8be26d8aa8188ca70394bafe79e0e0a" protocol=ttrpc version=3 Oct 30 13:02:37.710056 systemd[1]: Started cri-containerd-de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b.scope - libcontainer container de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b. 
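Each completed pull above is reported under two names: a mutable repo tag (quay.io/tigera/operator:v1.38.7) and an immutable repo digest (quay.io/tigera/operator@sha256:1b629a...), together with the image size containerd recorded and the wall-clock pull time (about 1.84s here). A rough splitter for the name:tag and name@digest shapes seen in this log, deliberately ignoring corner cases such as registries with ports:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks an image reference of the forms seen in this log,
    // name:tag or name@sha256:..., into its parts.
    func splitRef(ref string) (name, tag, digest string) {
        if at := strings.Index(ref, "@"); at != -1 {
            return ref[:at], "", ref[at+1:]
        }
        if colon := strings.LastIndex(ref, ":"); colon != -1 && !strings.Contains(ref[colon:], "/") {
            return ref[:colon], ref[colon+1:], ""
        }
        return ref, "", ""
    }

    func main() {
        for _, ref := range []string{
            "quay.io/tigera/operator:v1.38.7",
            "quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e",
            "registry.k8s.io/pause:3.10",
        } {
            name, tag, digest := splitRef(ref)
            fmt.Printf("name=%s tag=%s digest=%s\n", name, tag, digest)
        }
    }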
Oct 30 13:02:37.748066 containerd[1567]: time="2025-10-30T13:02:37.748021444Z" level=info msg="StartContainer for \"de16cb60ca6cfe06d153e3a50e971b6493f2dbbdbc85bedc667933a05d4d608b\" returns successfully" Oct 30 13:02:42.871626 kubelet[2714]: E1030 13:02:42.871591 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:42.898142 kubelet[2714]: I1030 13:02:42.898091 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-l9rj2" podStartSLOduration=6.057034253 podStartE2EDuration="7.898074618s" podCreationTimestamp="2025-10-30 13:02:35 +0000 UTC" firstStartedPulling="2025-10-30 13:02:35.819291626 +0000 UTC m=+7.827228232" lastFinishedPulling="2025-10-30 13:02:37.660331991 +0000 UTC m=+9.668268597" observedRunningTime="2025-10-30 13:02:38.123451559 +0000 UTC m=+10.131388165" watchObservedRunningTime="2025-10-30 13:02:42.898074618 +0000 UTC m=+14.906011184" Oct 30 13:02:43.123304 kubelet[2714]: E1030 13:02:43.123179 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:43.147591 sudo[1775]: pam_unix(sudo:session): session closed for user root Oct 30 13:02:43.150562 sshd[1774]: Connection closed by 10.0.0.1 port 42016 Oct 30 13:02:43.151344 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Oct 30 13:02:43.158898 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:42016.service: Deactivated successfully. Oct 30 13:02:43.161762 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 13:02:43.161997 systemd[1]: session-7.scope: Consumed 7.983s CPU time, 216.8M memory peak. Oct 30 13:02:43.164184 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Oct 30 13:02:43.166964 systemd-logind[1545]: Removed session 7. Oct 30 13:02:43.364765 update_engine[1550]: I20251030 13:02:43.364230 1550 update_attempter.cc:509] Updating boot flags... Oct 30 13:02:52.216709 systemd[1]: Created slice kubepods-besteffort-podc31a4c54_44d2_49eb_b1cf_9a3598ca7d85.slice - libcontainer container kubepods-besteffort-podc31a4c54_44d2_49eb_b1cf_9a3598ca7d85.slice. 
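Under the systemd cgroup driver, each pod gets a slice whose name encodes its QoS class and its UID with the dashes replaced by underscores: calico-typha-6f7b6f8fcf-dn7bc, UID c31a4c54-44d2-49eb-b1cf-9a3598ca7d85 and BestEffort QoS, lands in kubepods-besteffort-podc31a4c54_44d2_49eb_b1cf_9a3598ca7d85.slice, just as the kube-proxy and tigera-operator pods did earlier. A sketch of that naming rule (Guaranteed pods skip the QoS sub-slice and sit directly under kubepods.slice):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the systemd slice name the kubelet uses for a pod
    // under the systemd cgroup driver. qos is "" for Guaranteed pods,
    // otherwise "besteffort" or "burstable".
    func podSliceName(qos, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        if qos == "" {
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
        // Matches the slice created for calico-typha-6f7b6f8fcf-dn7bc above.
        fmt.Println(podSliceName("besteffort", "c31a4c54-44d2-49eb-b1cf-9a3598ca7d85"))
    }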
Oct 30 13:02:52.227954 kubelet[2714]: I1030 13:02:52.227464 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c31a4c54-44d2-49eb-b1cf-9a3598ca7d85-tigera-ca-bundle\") pod \"calico-typha-6f7b6f8fcf-dn7bc\" (UID: \"c31a4c54-44d2-49eb-b1cf-9a3598ca7d85\") " pod="calico-system/calico-typha-6f7b6f8fcf-dn7bc" Oct 30 13:02:52.227954 kubelet[2714]: I1030 13:02:52.227512 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4c74\" (UniqueName: \"kubernetes.io/projected/c31a4c54-44d2-49eb-b1cf-9a3598ca7d85-kube-api-access-t4c74\") pod \"calico-typha-6f7b6f8fcf-dn7bc\" (UID: \"c31a4c54-44d2-49eb-b1cf-9a3598ca7d85\") " pod="calico-system/calico-typha-6f7b6f8fcf-dn7bc" Oct 30 13:02:52.227954 kubelet[2714]: I1030 13:02:52.227530 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c31a4c54-44d2-49eb-b1cf-9a3598ca7d85-typha-certs\") pod \"calico-typha-6f7b6f8fcf-dn7bc\" (UID: \"c31a4c54-44d2-49eb-b1cf-9a3598ca7d85\") " pod="calico-system/calico-typha-6f7b6f8fcf-dn7bc" Oct 30 13:02:52.410090 systemd[1]: Created slice kubepods-besteffort-pod95fb08ee_6678_496b_a940_4da76dd7588a.slice - libcontainer container kubepods-besteffort-pod95fb08ee_6678_496b_a940_4da76dd7588a.slice. Oct 30 13:02:52.429332 kubelet[2714]: I1030 13:02:52.429237 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-var-run-calico\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429332 kubelet[2714]: I1030 13:02:52.429278 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-cni-bin-dir\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429332 kubelet[2714]: I1030 13:02:52.429297 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-flexvol-driver-host\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429332 kubelet[2714]: I1030 13:02:52.429314 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-lib-modules\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429332 kubelet[2714]: I1030 13:02:52.429327 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-var-lib-calico\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429584 kubelet[2714]: I1030 13:02:52.429343 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-xtables-lock\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429584 kubelet[2714]: I1030 13:02:52.429367 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95fb08ee-6678-496b-a940-4da76dd7588a-node-certs\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429584 kubelet[2714]: I1030 13:02:52.429383 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-policysync\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429584 kubelet[2714]: I1030 13:02:52.429397 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95fb08ee-6678-496b-a940-4da76dd7588a-tigera-ca-bundle\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429584 kubelet[2714]: I1030 13:02:52.429410 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-cni-net-dir\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429690 kubelet[2714]: I1030 13:02:52.429424 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/95fb08ee-6678-496b-a940-4da76dd7588a-cni-log-dir\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.429690 kubelet[2714]: I1030 13:02:52.429439 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8sw\" (UniqueName: \"kubernetes.io/projected/95fb08ee-6678-496b-a940-4da76dd7588a-kube-api-access-pc8sw\") pod \"calico-node-fjx57\" (UID: \"95fb08ee-6678-496b-a940-4da76dd7588a\") " pod="calico-system/calico-node-fjx57" Oct 30 13:02:52.522655 kubelet[2714]: E1030 13:02:52.522565 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:52.523964 containerd[1567]: time="2025-10-30T13:02:52.523670548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7b6f8fcf-dn7bc,Uid:c31a4c54-44d2-49eb-b1cf-9a3598ca7d85,Namespace:calico-system,Attempt:0,}" Oct 30 13:02:52.537972 kubelet[2714]: E1030 13:02:52.534803 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.537972 kubelet[2714]: W1030 13:02:52.534831 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.537972 kubelet[2714]: E1030 13:02:52.534850 2714 plugins.go:697] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.537972 kubelet[2714]: E1030 13:02:52.535064 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.537972 kubelet[2714]: W1030 13:02:52.535073 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.537972 kubelet[2714]: E1030 13:02:52.535082 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.545989 kubelet[2714]: E1030 13:02:52.545962 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.545989 kubelet[2714]: W1030 13:02:52.545985 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.546080 kubelet[2714]: E1030 13:02:52.546000 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.565707 containerd[1567]: time="2025-10-30T13:02:52.565667878Z" level=info msg="connecting to shim bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b" address="unix:///run/containerd/s/e45954805b04e65d1b38ace9860f7b739358004b26038e3eb4cf729e92428179" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:52.600244 kubelet[2714]: E1030 13:02:52.600114 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:02:52.604105 systemd[1]: Started cri-containerd-bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b.scope - libcontainer container bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b. Oct 30 13:02:52.623998 kubelet[2714]: E1030 13:02:52.623784 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.623998 kubelet[2714]: W1030 13:02:52.623804 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.623998 kubelet[2714]: E1030 13:02:52.623822 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.624306 kubelet[2714]: E1030 13:02:52.624166 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.624306 kubelet[2714]: W1030 13:02:52.624179 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.624306 kubelet[2714]: E1030 13:02:52.624218 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.624452 kubelet[2714]: E1030 13:02:52.624439 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.624509 kubelet[2714]: W1030 13:02:52.624497 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.624561 kubelet[2714]: E1030 13:02:52.624550 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.624858 kubelet[2714]: E1030 13:02:52.624756 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.624858 kubelet[2714]: W1030 13:02:52.624767 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.624858 kubelet[2714]: E1030 13:02:52.624777 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.625018 kubelet[2714]: E1030 13:02:52.625006 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.625078 kubelet[2714]: W1030 13:02:52.625065 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.625128 kubelet[2714]: E1030 13:02:52.625117 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.625394 kubelet[2714]: E1030 13:02:52.625295 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.625394 kubelet[2714]: W1030 13:02:52.625306 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.625394 kubelet[2714]: E1030 13:02:52.625319 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.625534 kubelet[2714]: E1030 13:02:52.625522 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.625601 kubelet[2714]: W1030 13:02:52.625588 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.625656 kubelet[2714]: E1030 13:02:52.625643 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.625953 kubelet[2714]: E1030 13:02:52.625837 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.625953 kubelet[2714]: W1030 13:02:52.625849 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.625953 kubelet[2714]: E1030 13:02:52.625858 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.626096 kubelet[2714]: E1030 13:02:52.626084 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.626236 kubelet[2714]: W1030 13:02:52.626142 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.626236 kubelet[2714]: E1030 13:02:52.626158 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.626351 kubelet[2714]: E1030 13:02:52.626339 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.626401 kubelet[2714]: W1030 13:02:52.626390 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.626446 kubelet[2714]: E1030 13:02:52.626437 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.626709 kubelet[2714]: E1030 13:02:52.626615 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.626709 kubelet[2714]: W1030 13:02:52.626627 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.626709 kubelet[2714]: E1030 13:02:52.626635 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.626852 kubelet[2714]: E1030 13:02:52.626839 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.627012 kubelet[2714]: W1030 13:02:52.626892 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.627012 kubelet[2714]: E1030 13:02:52.626907 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.627133 kubelet[2714]: E1030 13:02:52.627121 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.627399 kubelet[2714]: W1030 13:02:52.627307 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.627399 kubelet[2714]: E1030 13:02:52.627325 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.627607 kubelet[2714]: E1030 13:02:52.627593 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.627678 kubelet[2714]: W1030 13:02:52.627666 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.627804 kubelet[2714]: E1030 13:02:52.627718 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.627981 kubelet[2714]: E1030 13:02:52.627959 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.628240 kubelet[2714]: W1030 13:02:52.628040 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.628240 kubelet[2714]: E1030 13:02:52.628167 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.628456 kubelet[2714]: E1030 13:02:52.628440 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.628723 kubelet[2714]: W1030 13:02:52.628606 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.628723 kubelet[2714]: E1030 13:02:52.628627 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.628860 kubelet[2714]: E1030 13:02:52.628846 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.629567 kubelet[2714]: W1030 13:02:52.629457 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.629567 kubelet[2714]: E1030 13:02:52.629481 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.629794 kubelet[2714]: E1030 13:02:52.629693 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.629794 kubelet[2714]: W1030 13:02:52.629705 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.629794 kubelet[2714]: E1030 13:02:52.629715 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.630038 kubelet[2714]: E1030 13:02:52.629935 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.630038 kubelet[2714]: W1030 13:02:52.629947 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.630038 kubelet[2714]: E1030 13:02:52.629956 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.630259 kubelet[2714]: E1030 13:02:52.630173 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.630259 kubelet[2714]: W1030 13:02:52.630184 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.630259 kubelet[2714]: E1030 13:02:52.630193 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.631397 kubelet[2714]: E1030 13:02:52.631329 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.631397 kubelet[2714]: W1030 13:02:52.631347 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.631397 kubelet[2714]: E1030 13:02:52.631360 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.631397 kubelet[2714]: I1030 13:02:52.631382 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b490fe4-c844-4da4-8382-9773dc4546ac-kubelet-dir\") pod \"csi-node-driver-22knm\" (UID: \"9b490fe4-c844-4da4-8382-9773dc4546ac\") " pod="calico-system/csi-node-driver-22knm" Oct 30 13:02:52.632032 kubelet[2714]: E1030 13:02:52.631543 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632032 kubelet[2714]: W1030 13:02:52.631554 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632032 kubelet[2714]: E1030 13:02:52.631564 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.632032 kubelet[2714]: I1030 13:02:52.631588 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t68g6\" (UniqueName: \"kubernetes.io/projected/9b490fe4-c844-4da4-8382-9773dc4546ac-kube-api-access-t68g6\") pod \"csi-node-driver-22knm\" (UID: \"9b490fe4-c844-4da4-8382-9773dc4546ac\") " pod="calico-system/csi-node-driver-22knm" Oct 30 13:02:52.632032 kubelet[2714]: E1030 13:02:52.631763 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632032 kubelet[2714]: W1030 13:02:52.631772 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632032 kubelet[2714]: E1030 13:02:52.631803 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.632032 kubelet[2714]: I1030 13:02:52.631986 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b490fe4-c844-4da4-8382-9773dc4546ac-registration-dir\") pod \"csi-node-driver-22knm\" (UID: \"9b490fe4-c844-4da4-8382-9773dc4546ac\") " pod="calico-system/csi-node-driver-22knm" Oct 30 13:02:52.632312 kubelet[2714]: E1030 13:02:52.632256 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632312 kubelet[2714]: W1030 13:02:52.632272 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632312 kubelet[2714]: E1030 13:02:52.632284 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.632538 kubelet[2714]: E1030 13:02:52.632475 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632538 kubelet[2714]: W1030 13:02:52.632486 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632538 kubelet[2714]: E1030 13:02:52.632494 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.632781 kubelet[2714]: E1030 13:02:52.632729 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632781 kubelet[2714]: W1030 13:02:52.632740 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632781 kubelet[2714]: E1030 13:02:52.632748 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.632971 kubelet[2714]: E1030 13:02:52.632885 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.632971 kubelet[2714]: W1030 13:02:52.632895 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.632971 kubelet[2714]: E1030 13:02:52.632903 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.633955 kubelet[2714]: E1030 13:02:52.633085 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.633955 kubelet[2714]: W1030 13:02:52.633098 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.633955 kubelet[2714]: E1030 13:02:52.633106 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.633955 kubelet[2714]: I1030 13:02:52.633127 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b490fe4-c844-4da4-8382-9773dc4546ac-varrun\") pod \"csi-node-driver-22knm\" (UID: \"9b490fe4-c844-4da4-8382-9773dc4546ac\") " pod="calico-system/csi-node-driver-22knm" Oct 30 13:02:52.633955 kubelet[2714]: E1030 13:02:52.633275 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.633955 kubelet[2714]: W1030 13:02:52.633288 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.633955 kubelet[2714]: E1030 13:02:52.633302 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.633955 kubelet[2714]: E1030 13:02:52.633472 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.633955 kubelet[2714]: W1030 13:02:52.633481 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.633508 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.633655 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.634280 kubelet[2714]: W1030 13:02:52.633663 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.633672 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.634280 kubelet[2714]: I1030 13:02:52.633703 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b490fe4-c844-4da4-8382-9773dc4546ac-socket-dir\") pod \"csi-node-driver-22knm\" (UID: \"9b490fe4-c844-4da4-8382-9773dc4546ac\") " pod="calico-system/csi-node-driver-22knm" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.633864 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.634280 kubelet[2714]: W1030 13:02:52.633874 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.633892 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.634280 kubelet[2714]: E1030 13:02:52.634029 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.634443 kubelet[2714]: W1030 13:02:52.634038 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634443 kubelet[2714]: E1030 13:02:52.634045 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.634522 kubelet[2714]: E1030 13:02:52.634511 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.634522 kubelet[2714]: W1030 13:02:52.634520 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634588 kubelet[2714]: E1030 13:02:52.634528 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.634656 kubelet[2714]: E1030 13:02:52.634646 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.634656 kubelet[2714]: W1030 13:02:52.634656 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.634704 kubelet[2714]: E1030 13:02:52.634664 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.650471 containerd[1567]: time="2025-10-30T13:02:52.650441142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f7b6f8fcf-dn7bc,Uid:c31a4c54-44d2-49eb-b1cf-9a3598ca7d85,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b\"" Oct 30 13:02:52.651368 kubelet[2714]: E1030 13:02:52.651351 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:52.652971 containerd[1567]: time="2025-10-30T13:02:52.652943517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 13:02:52.715777 kubelet[2714]: E1030 13:02:52.715518 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:52.716228 containerd[1567]: time="2025-10-30T13:02:52.716161292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fjx57,Uid:95fb08ee-6678-496b-a940-4da76dd7588a,Namespace:calico-system,Attempt:0,}" Oct 30 13:02:52.735030 kubelet[2714]: E1030 13:02:52.735003 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.735030 kubelet[2714]: W1030 13:02:52.735026 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.735138 kubelet[2714]: E1030 13:02:52.735044 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.735264 kubelet[2714]: E1030 13:02:52.735248 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.735264 kubelet[2714]: W1030 13:02:52.735261 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.735317 kubelet[2714]: E1030 13:02:52.735271 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.735507 kubelet[2714]: E1030 13:02:52.735486 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.735541 kubelet[2714]: W1030 13:02:52.735508 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.735541 kubelet[2714]: E1030 13:02:52.735524 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.735785 kubelet[2714]: E1030 13:02:52.735772 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.735785 kubelet[2714]: W1030 13:02:52.735784 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.735841 kubelet[2714]: E1030 13:02:52.735794 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.735863 containerd[1567]: time="2025-10-30T13:02:52.735820569Z" level=info msg="connecting to shim c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916" address="unix:///run/containerd/s/d0c7ef683d04d25301fc0ffba6cd79c9f11d17aa910750519c1d4e98a0c1e74a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:02:52.735958 kubelet[2714]: E1030 13:02:52.735936 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.735958 kubelet[2714]: W1030 13:02:52.735947 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.735958 kubelet[2714]: E1030 13:02:52.735955 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.736207 kubelet[2714]: E1030 13:02:52.736194 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.736242 kubelet[2714]: W1030 13:02:52.736207 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.736242 kubelet[2714]: E1030 13:02:52.736218 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.736441 kubelet[2714]: E1030 13:02:52.736426 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.736441 kubelet[2714]: W1030 13:02:52.736438 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.736505 kubelet[2714]: E1030 13:02:52.736448 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.736627 kubelet[2714]: E1030 13:02:52.736616 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.736627 kubelet[2714]: W1030 13:02:52.736626 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.736678 kubelet[2714]: E1030 13:02:52.736635 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.736869 kubelet[2714]: E1030 13:02:52.736856 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.736869 kubelet[2714]: W1030 13:02:52.736868 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.736966 kubelet[2714]: E1030 13:02:52.736879 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.737107 kubelet[2714]: E1030 13:02:52.737095 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.737137 kubelet[2714]: W1030 13:02:52.737107 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.737137 kubelet[2714]: E1030 13:02:52.737117 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.737309 kubelet[2714]: E1030 13:02:52.737296 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.737309 kubelet[2714]: W1030 13:02:52.737308 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.737363 kubelet[2714]: E1030 13:02:52.737319 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.737482 kubelet[2714]: E1030 13:02:52.737471 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.737482 kubelet[2714]: W1030 13:02:52.737481 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.737532 kubelet[2714]: E1030 13:02:52.737489 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.737656 kubelet[2714]: E1030 13:02:52.737646 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.737685 kubelet[2714]: W1030 13:02:52.737656 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.737685 kubelet[2714]: E1030 13:02:52.737666 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.737847 kubelet[2714]: E1030 13:02:52.737836 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.737847 kubelet[2714]: W1030 13:02:52.737847 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.737896 kubelet[2714]: E1030 13:02:52.737856 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.738252 kubelet[2714]: E1030 13:02:52.738239 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.738252 kubelet[2714]: W1030 13:02:52.738252 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.738317 kubelet[2714]: E1030 13:02:52.738263 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.738435 kubelet[2714]: E1030 13:02:52.738424 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.738435 kubelet[2714]: W1030 13:02:52.738435 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.738484 kubelet[2714]: E1030 13:02:52.738444 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.738689 kubelet[2714]: E1030 13:02:52.738675 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.738763 kubelet[2714]: W1030 13:02:52.738742 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.738836 kubelet[2714]: E1030 13:02:52.738818 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.739092 kubelet[2714]: E1030 13:02:52.739075 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.739092 kubelet[2714]: W1030 13:02:52.739087 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.739092 kubelet[2714]: E1030 13:02:52.739096 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.739258 kubelet[2714]: E1030 13:02:52.739245 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.739258 kubelet[2714]: W1030 13:02:52.739255 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.739318 kubelet[2714]: E1030 13:02:52.739262 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.739500 kubelet[2714]: E1030 13:02:52.739483 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.739500 kubelet[2714]: W1030 13:02:52.739498 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.739555 kubelet[2714]: E1030 13:02:52.739510 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.740075 kubelet[2714]: E1030 13:02:52.740055 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.740179 kubelet[2714]: W1030 13:02:52.740146 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.740296 kubelet[2714]: E1030 13:02:52.740181 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.740508 kubelet[2714]: E1030 13:02:52.740494 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.740508 kubelet[2714]: W1030 13:02:52.740508 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.740569 kubelet[2714]: E1030 13:02:52.740519 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:52.741066 kubelet[2714]: E1030 13:02:52.741049 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.741111 kubelet[2714]: W1030 13:02:52.741064 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.741111 kubelet[2714]: E1030 13:02:52.741097 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.741305 kubelet[2714]: E1030 13:02:52.741293 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.741333 kubelet[2714]: W1030 13:02:52.741305 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.741333 kubelet[2714]: E1030 13:02:52.741323 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.741501 kubelet[2714]: E1030 13:02:52.741489 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.741530 kubelet[2714]: W1030 13:02:52.741520 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.741558 kubelet[2714]: E1030 13:02:52.741532 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.749145 kubelet[2714]: E1030 13:02:52.749128 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:52.749307 kubelet[2714]: W1030 13:02:52.749246 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:52.749307 kubelet[2714]: E1030 13:02:52.749275 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:52.762086 systemd[1]: Started cri-containerd-c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916.scope - libcontainer container c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916. 
Oct 30 13:02:52.796978 containerd[1567]: time="2025-10-30T13:02:52.795081242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fjx57,Uid:95fb08ee-6678-496b-a940-4da76dd7588a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\"" Oct 30 13:02:52.798904 kubelet[2714]: E1030 13:02:52.798878 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:54.071496 kubelet[2714]: E1030 13:02:54.071103 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:02:56.071360 kubelet[2714]: E1030 13:02:56.071247 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:02:57.168883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456973583.mount: Deactivated successfully. Oct 30 13:02:58.073404 kubelet[2714]: E1030 13:02:58.073339 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:02:58.822621 containerd[1567]: time="2025-10-30T13:02:58.822567651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:58.823932 containerd[1567]: time="2025-10-30T13:02:58.823890856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 30 13:02:58.824705 containerd[1567]: time="2025-10-30T13:02:58.824684660Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:58.826600 containerd[1567]: time="2025-10-30T13:02:58.826566267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:02:58.827400 containerd[1567]: time="2025-10-30T13:02:58.827095029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 6.174120272s" Oct 30 13:02:58.827400 containerd[1567]: time="2025-10-30T13:02:58.827127790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 30 13:02:58.828290 containerd[1567]: time="2025-10-30T13:02:58.828266194Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 13:02:58.845063 containerd[1567]: time="2025-10-30T13:02:58.845024142Z" level=info msg="CreateContainer within sandbox \"bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 13:02:58.852172 containerd[1567]: time="2025-10-30T13:02:58.852119650Z" level=info msg="Container 848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:02:58.855825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275933005.mount: Deactivated successfully. Oct 30 13:02:58.859666 containerd[1567]: time="2025-10-30T13:02:58.859632841Z" level=info msg="CreateContainer within sandbox \"bbb03ce6b9032f7c3f88ed49e72d9e95624c18fc328876ebc305804f66c68f5b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b\"" Oct 30 13:02:58.860125 containerd[1567]: time="2025-10-30T13:02:58.860100483Z" level=info msg="StartContainer for \"848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b\"" Oct 30 13:02:58.861710 containerd[1567]: time="2025-10-30T13:02:58.861663249Z" level=info msg="connecting to shim 848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b" address="unix:///run/containerd/s/e45954805b04e65d1b38ace9860f7b739358004b26038e3eb4cf729e92428179" protocol=ttrpc version=3 Oct 30 13:02:58.891086 systemd[1]: Started cri-containerd-848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b.scope - libcontainer container 848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b. Oct 30 13:02:58.932638 containerd[1567]: time="2025-10-30T13:02:58.932601495Z" level=info msg="StartContainer for \"848f9ab50934ec95d59e644ab895f25c93b5bb0b24a998f2e5b4ab1f61b26d0b\" returns successfully" Oct 30 13:02:59.162868 kubelet[2714]: E1030 13:02:59.162740 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:02:59.172781 kubelet[2714]: E1030 13:02:59.172679 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.172781 kubelet[2714]: W1030 13:02:59.172698 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.172781 kubelet[2714]: E1030 13:02:59.172715 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.172965 kubelet[2714]: E1030 13:02:59.172848 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.172965 kubelet[2714]: W1030 13:02:59.172854 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.172965 kubelet[2714]: E1030 13:02:59.172884 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.173071 kubelet[2714]: E1030 13:02:59.173040 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.173071 kubelet[2714]: W1030 13:02:59.173049 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.173071 kubelet[2714]: E1030 13:02:59.173058 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.173772 kubelet[2714]: E1030 13:02:59.173742 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.173772 kubelet[2714]: W1030 13:02:59.173761 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.173861 kubelet[2714]: E1030 13:02:59.173778 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.174379 kubelet[2714]: E1030 13:02:59.173962 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.174379 kubelet[2714]: W1030 13:02:59.173977 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.174379 kubelet[2714]: E1030 13:02:59.173986 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.175036 kubelet[2714]: I1030 13:02:59.174978 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f7b6f8fcf-dn7bc" podStartSLOduration=0.999029388 podStartE2EDuration="7.174776829s" podCreationTimestamp="2025-10-30 13:02:52 +0000 UTC" firstStartedPulling="2025-10-30 13:02:52.652404593 +0000 UTC m=+24.660341199" lastFinishedPulling="2025-10-30 13:02:58.828152034 +0000 UTC m=+30.836088640" observedRunningTime="2025-10-30 13:02:59.174706268 +0000 UTC m=+31.182642874" watchObservedRunningTime="2025-10-30 13:02:59.174776829 +0000 UTC m=+31.182713395" Oct 30 13:02:59.175332 kubelet[2714]: E1030 13:02:59.175190 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.175332 kubelet[2714]: W1030 13:02:59.175203 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.175332 kubelet[2714]: E1030 13:02:59.175215 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.175332 kubelet[2714]: E1030 13:02:59.175403 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.175332 kubelet[2714]: W1030 13:02:59.175412 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.175332 kubelet[2714]: E1030 13:02:59.175420 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.175763 kubelet[2714]: E1030 13:02:59.175652 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.175763 kubelet[2714]: W1030 13:02:59.175661 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.175763 kubelet[2714]: E1030 13:02:59.175670 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176031 kubelet[2714]: E1030 13:02:59.175887 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176031 kubelet[2714]: W1030 13:02:59.175898 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176031 kubelet[2714]: E1030 13:02:59.175907 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176098 kubelet[2714]: E1030 13:02:59.176057 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176098 kubelet[2714]: W1030 13:02:59.176067 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176098 kubelet[2714]: E1030 13:02:59.176075 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176199 kubelet[2714]: E1030 13:02:59.176186 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176199 kubelet[2714]: W1030 13:02:59.176195 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176243 kubelet[2714]: E1030 13:02:59.176203 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.176327 kubelet[2714]: E1030 13:02:59.176316 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176348 kubelet[2714]: W1030 13:02:59.176327 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176348 kubelet[2714]: E1030 13:02:59.176334 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176514 kubelet[2714]: E1030 13:02:59.176501 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176514 kubelet[2714]: W1030 13:02:59.176512 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176564 kubelet[2714]: E1030 13:02:59.176522 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176676 kubelet[2714]: E1030 13:02:59.176663 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176676 kubelet[2714]: W1030 13:02:59.176675 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176727 kubelet[2714]: E1030 13:02:59.176700 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.176838 kubelet[2714]: E1030 13:02:59.176827 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.176838 kubelet[2714]: W1030 13:02:59.176837 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.176881 kubelet[2714]: E1030 13:02:59.176845 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.183208 kubelet[2714]: E1030 13:02:59.183179 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.183208 kubelet[2714]: W1030 13:02:59.183198 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.183208 kubelet[2714]: E1030 13:02:59.183212 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.183437 kubelet[2714]: E1030 13:02:59.183423 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.183437 kubelet[2714]: W1030 13:02:59.183436 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.183498 kubelet[2714]: E1030 13:02:59.183445 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.183634 kubelet[2714]: E1030 13:02:59.183623 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.183634 kubelet[2714]: W1030 13:02:59.183634 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.183694 kubelet[2714]: E1030 13:02:59.183643 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.183854 kubelet[2714]: E1030 13:02:59.183842 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.183854 kubelet[2714]: W1030 13:02:59.183852 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.183905 kubelet[2714]: E1030 13:02:59.183861 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.184022 kubelet[2714]: E1030 13:02:59.184011 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184022 kubelet[2714]: W1030 13:02:59.184021 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.184076 kubelet[2714]: E1030 13:02:59.184029 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.184163 kubelet[2714]: E1030 13:02:59.184152 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184163 kubelet[2714]: W1030 13:02:59.184161 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.184209 kubelet[2714]: E1030 13:02:59.184169 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.184333 kubelet[2714]: E1030 13:02:59.184322 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184356 kubelet[2714]: W1030 13:02:59.184333 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.184356 kubelet[2714]: E1030 13:02:59.184342 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.184626 kubelet[2714]: E1030 13:02:59.184594 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184626 kubelet[2714]: W1030 13:02:59.184613 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.184626 kubelet[2714]: E1030 13:02:59.184626 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.184774 kubelet[2714]: E1030 13:02:59.184763 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184774 kubelet[2714]: W1030 13:02:59.184772 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.184833 kubelet[2714]: E1030 13:02:59.184781 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.184942 kubelet[2714]: E1030 13:02:59.184930 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.184942 kubelet[2714]: W1030 13:02:59.184941 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.185035 kubelet[2714]: E1030 13:02:59.184949 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.185088 kubelet[2714]: E1030 13:02:59.185074 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.185088 kubelet[2714]: W1030 13:02:59.185085 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.185142 kubelet[2714]: E1030 13:02:59.185095 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.185270 kubelet[2714]: E1030 13:02:59.185258 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.185270 kubelet[2714]: W1030 13:02:59.185269 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.185329 kubelet[2714]: E1030 13:02:59.185277 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.185432 kubelet[2714]: E1030 13:02:59.185422 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.185432 kubelet[2714]: W1030 13:02:59.185431 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.185498 kubelet[2714]: E1030 13:02:59.185439 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.185713 kubelet[2714]: E1030 13:02:59.185697 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.185772 kubelet[2714]: W1030 13:02:59.185759 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.185829 kubelet[2714]: E1030 13:02:59.185818 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.186058 kubelet[2714]: E1030 13:02:59.186044 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.186161 kubelet[2714]: W1030 13:02:59.186146 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.186228 kubelet[2714]: E1030 13:02:59.186213 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.186707 kubelet[2714]: E1030 13:02:59.186442 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.186707 kubelet[2714]: W1030 13:02:59.186454 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.186707 kubelet[2714]: E1030 13:02:59.186464 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:02:59.186707 kubelet[2714]: E1030 13:02:59.186702 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.186707 kubelet[2714]: W1030 13:02:59.186714 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.186855 kubelet[2714]: E1030 13:02:59.186724 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:02:59.186879 kubelet[2714]: E1030 13:02:59.186869 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:02:59.186879 kubelet[2714]: W1030 13:02:59.186877 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:02:59.186936 kubelet[2714]: E1030 13:02:59.186885 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.071199 kubelet[2714]: E1030 13:03:00.071146 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:00.108755 containerd[1567]: time="2025-10-30T13:03:00.108702336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:00.109402 containerd[1567]: time="2025-10-30T13:03:00.109376219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 30 13:03:00.111934 containerd[1567]: time="2025-10-30T13:03:00.111712667Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:00.113648 containerd[1567]: time="2025-10-30T13:03:00.113598794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:00.114426 containerd[1567]: time="2025-10-30T13:03:00.114134196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.285839122s" Oct 30 13:03:00.114426 containerd[1567]: time="2025-10-30T13:03:00.114168036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 30 13:03:00.118700 containerd[1567]: 
time="2025-10-30T13:03:00.118672852Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 13:03:00.126969 containerd[1567]: time="2025-10-30T13:03:00.126917001Z" level=info msg="Container b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:03:00.133485 containerd[1567]: time="2025-10-30T13:03:00.133436704Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\"" Oct 30 13:03:00.133960 containerd[1567]: time="2025-10-30T13:03:00.133897906Z" level=info msg="StartContainer for \"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\"" Oct 30 13:03:00.136783 containerd[1567]: time="2025-10-30T13:03:00.136751876Z" level=info msg="connecting to shim b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516" address="unix:///run/containerd/s/d0c7ef683d04d25301fc0ffba6cd79c9f11d17aa910750519c1d4e98a0c1e74a" protocol=ttrpc version=3 Oct 30 13:03:00.161099 systemd[1]: Started cri-containerd-b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516.scope - libcontainer container b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516. Oct 30 13:03:00.166134 kubelet[2714]: I1030 13:03:00.166094 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:03:00.167308 kubelet[2714]: E1030 13:03:00.166913 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:00.183454 kubelet[2714]: E1030 13:03:00.183344 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.183454 kubelet[2714]: W1030 13:03:00.183362 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.183454 kubelet[2714]: E1030 13:03:00.183379 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.184686 kubelet[2714]: E1030 13:03:00.184574 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.184686 kubelet[2714]: W1030 13:03:00.184601 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.184686 kubelet[2714]: E1030 13:03:00.184615 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.185069 kubelet[2714]: E1030 13:03:00.184962 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.185069 kubelet[2714]: W1030 13:03:00.184975 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.185069 kubelet[2714]: E1030 13:03:00.184985 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.185218 kubelet[2714]: E1030 13:03:00.185206 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.185270 kubelet[2714]: W1030 13:03:00.185259 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.185322 kubelet[2714]: E1030 13:03:00.185311 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.185541 kubelet[2714]: E1030 13:03:00.185527 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.185624 kubelet[2714]: W1030 13:03:00.185611 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.185681 kubelet[2714]: E1030 13:03:00.185670 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.185887 kubelet[2714]: E1030 13:03:00.185873 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.185992 kubelet[2714]: W1030 13:03:00.185978 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.186178 kubelet[2714]: E1030 13:03:00.186076 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.186285 kubelet[2714]: E1030 13:03:00.186272 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.186337 kubelet[2714]: W1030 13:03:00.186325 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.186389 kubelet[2714]: E1030 13:03:00.186379 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.186600 kubelet[2714]: E1030 13:03:00.186574 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.186686 kubelet[2714]: W1030 13:03:00.186672 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.186750 kubelet[2714]: E1030 13:03:00.186738 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.187109 kubelet[2714]: E1030 13:03:00.187020 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.187109 kubelet[2714]: W1030 13:03:00.187032 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.187109 kubelet[2714]: E1030 13:03:00.187042 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.187265 kubelet[2714]: E1030 13:03:00.187253 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.187401 kubelet[2714]: W1030 13:03:00.187305 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.187401 kubelet[2714]: E1030 13:03:00.187319 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.187553 kubelet[2714]: E1030 13:03:00.187540 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.187713 kubelet[2714]: W1030 13:03:00.187608 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.187713 kubelet[2714]: E1030 13:03:00.187624 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.187847 kubelet[2714]: E1030 13:03:00.187835 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.187898 kubelet[2714]: W1030 13:03:00.187888 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.187977 kubelet[2714]: E1030 13:03:00.187964 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.188298 kubelet[2714]: E1030 13:03:00.188205 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.188298 kubelet[2714]: W1030 13:03:00.188216 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.188298 kubelet[2714]: E1030 13:03:00.188225 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.188452 kubelet[2714]: E1030 13:03:00.188440 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.188501 kubelet[2714]: W1030 13:03:00.188490 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.188549 kubelet[2714]: E1030 13:03:00.188539 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.188888 kubelet[2714]: E1030 13:03:00.188807 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.188888 kubelet[2714]: W1030 13:03:00.188819 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.188888 kubelet[2714]: E1030 13:03:00.188829 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.194329 kubelet[2714]: E1030 13:03:00.194228 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.194329 kubelet[2714]: W1030 13:03:00.194244 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.194329 kubelet[2714]: E1030 13:03:00.194256 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.194462 kubelet[2714]: E1030 13:03:00.194452 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.194462 kubelet[2714]: W1030 13:03:00.194460 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.194506 kubelet[2714]: E1030 13:03:00.194470 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.194671 kubelet[2714]: E1030 13:03:00.194651 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.194671 kubelet[2714]: W1030 13:03:00.194670 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.194737 kubelet[2714]: E1030 13:03:00.194680 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.194937 kubelet[2714]: E1030 13:03:00.194897 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.194937 kubelet[2714]: W1030 13:03:00.194912 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.194937 kubelet[2714]: E1030 13:03:00.194939 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.195339 kubelet[2714]: E1030 13:03:00.195100 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.195339 kubelet[2714]: W1030 13:03:00.195109 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.195339 kubelet[2714]: E1030 13:03:00.195118 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.195339 kubelet[2714]: E1030 13:03:00.195268 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.195339 kubelet[2714]: W1030 13:03:00.195276 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.195339 kubelet[2714]: E1030 13:03:00.195284 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.195472 kubelet[2714]: E1030 13:03:00.195443 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.195472 kubelet[2714]: W1030 13:03:00.195452 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.195472 kubelet[2714]: E1030 13:03:00.195461 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.195892 kubelet[2714]: E1030 13:03:00.195768 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.195892 kubelet[2714]: W1030 13:03:00.195792 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.195892 kubelet[2714]: E1030 13:03:00.195806 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.196243 kubelet[2714]: E1030 13:03:00.196184 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.196243 kubelet[2714]: W1030 13:03:00.196197 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.196243 kubelet[2714]: E1030 13:03:00.196208 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.196536 kubelet[2714]: E1030 13:03:00.196502 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.196536 kubelet[2714]: W1030 13:03:00.196514 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.196536 kubelet[2714]: E1030 13:03:00.196524 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.197136 kubelet[2714]: E1030 13:03:00.197118 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.197291 kubelet[2714]: W1030 13:03:00.197195 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.197291 kubelet[2714]: E1030 13:03:00.197211 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.197716 kubelet[2714]: E1030 13:03:00.197591 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.197716 kubelet[2714]: W1030 13:03:00.197606 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.197716 kubelet[2714]: E1030 13:03:00.197616 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.198058 kubelet[2714]: E1030 13:03:00.198042 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.198304 kubelet[2714]: W1030 13:03:00.198262 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.198304 kubelet[2714]: E1030 13:03:00.198288 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.199285 kubelet[2714]: E1030 13:03:00.198986 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.199285 kubelet[2714]: W1030 13:03:00.199002 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.199285 kubelet[2714]: E1030 13:03:00.199014 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.199499 kubelet[2714]: E1030 13:03:00.199484 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.199907 kubelet[2714]: W1030 13:03:00.199765 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.199907 kubelet[2714]: E1030 13:03:00.199791 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.200162 kubelet[2714]: E1030 13:03:00.200150 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.200501 kubelet[2714]: W1030 13:03:00.200414 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.200501 kubelet[2714]: E1030 13:03:00.200433 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.200701 kubelet[2714]: E1030 13:03:00.200667 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.200701 kubelet[2714]: W1030 13:03:00.200682 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.200701 kubelet[2714]: E1030 13:03:00.200694 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 13:03:00.201282 kubelet[2714]: E1030 13:03:00.201259 2714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 13:03:00.201518 kubelet[2714]: W1030 13:03:00.201367 2714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 13:03:00.201518 kubelet[2714]: E1030 13:03:00.201406 2714 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 13:03:00.201712 containerd[1567]: time="2025-10-30T13:03:00.201674866Z" level=info msg="StartContainer for \"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\" returns successfully" Oct 30 13:03:00.211315 systemd[1]: cri-containerd-b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516.scope: Deactivated successfully. Oct 30 13:03:00.228913 containerd[1567]: time="2025-10-30T13:03:00.228858323Z" level=info msg="received exit event container_id:\"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\" id:\"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\" pid:3425 exited_at:{seconds:1761829380 nanos:224496507}" Oct 30 13:03:00.235757 containerd[1567]: time="2025-10-30T13:03:00.235129705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\" id:\"b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516\" pid:3425 exited_at:{seconds:1761829380 nanos:224496507}" Oct 30 13:03:00.265266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9d4465662c0cc47bfeddb52ded29524b3fb960e3a5173a2421829504140c516-rootfs.mount: Deactivated successfully. 
Oct 30 13:03:01.172877 kubelet[2714]: E1030 13:03:01.172816 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:01.174293 containerd[1567]: time="2025-10-30T13:03:01.174261918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 13:03:02.071074 kubelet[2714]: E1030 13:03:02.071028 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:02.121774 kubelet[2714]: I1030 13:03:02.121522 2714 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 13:03:02.122197 kubelet[2714]: E1030 13:03:02.122177 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:02.174862 kubelet[2714]: E1030 13:03:02.174752 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:04.070882 kubelet[2714]: E1030 13:03:04.070812 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:05.359998 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:39638.service - OpenSSH per-connection server daemon (10.0.0.1:39638). Oct 30 13:03:05.434996 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 39638 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:05.435912 sshd-session[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:05.440243 systemd-logind[1545]: New session 8 of user core. Oct 30 13:03:05.451077 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 13:03:05.547790 sshd[3504]: Connection closed by 10.0.0.1 port 39638 Oct 30 13:03:05.549098 sshd-session[3501]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:05.553394 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Oct 30 13:03:05.553616 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:39638.service: Deactivated successfully. Oct 30 13:03:05.556546 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 13:03:05.560535 systemd-logind[1545]: Removed session 8. 
Oct 30 13:03:06.071096 kubelet[2714]: E1030 13:03:06.071050 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:06.488906 containerd[1567]: time="2025-10-30T13:03:06.488864070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:06.489971 containerd[1567]: time="2025-10-30T13:03:06.489525632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 30 13:03:06.490352 containerd[1567]: time="2025-10-30T13:03:06.490326353Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:06.492421 containerd[1567]: time="2025-10-30T13:03:06.492396078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:06.492994 containerd[1567]: time="2025-10-30T13:03:06.492953400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 5.318310401s" Oct 30 13:03:06.492994 containerd[1567]: time="2025-10-30T13:03:06.492984080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 30 13:03:06.497737 containerd[1567]: time="2025-10-30T13:03:06.497697251Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 13:03:06.504866 containerd[1567]: time="2025-10-30T13:03:06.503814746Z" level=info msg="Container 06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:03:06.511731 containerd[1567]: time="2025-10-30T13:03:06.511678605Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\"" Oct 30 13:03:06.512159 containerd[1567]: time="2025-10-30T13:03:06.512042486Z" level=info msg="StartContainer for \"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\"" Oct 30 13:03:06.513661 containerd[1567]: time="2025-10-30T13:03:06.513614330Z" level=info msg="connecting to shim 06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987" address="unix:///run/containerd/s/d0c7ef683d04d25301fc0ffba6cd79c9f11d17aa910750519c1d4e98a0c1e74a" protocol=ttrpc version=3 Oct 30 13:03:06.535088 systemd[1]: Started cri-containerd-06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987.scope - libcontainer container 06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987. 
Oct 30 13:03:06.573419 containerd[1567]: time="2025-10-30T13:03:06.573318073Z" level=info msg="StartContainer for \"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\" returns successfully" Oct 30 13:03:07.104049 systemd[1]: cri-containerd-06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987.scope: Deactivated successfully. Oct 30 13:03:07.104605 systemd[1]: cri-containerd-06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987.scope: Consumed 457ms CPU time, 178.6M memory peak, 1.7M read from disk, 165.9M written to disk. Oct 30 13:03:07.106976 containerd[1567]: time="2025-10-30T13:03:07.105974940Z" level=info msg="received exit event container_id:\"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\" id:\"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\" pid:3540 exited_at:{seconds:1761829387 nanos:105745140}" Oct 30 13:03:07.106976 containerd[1567]: time="2025-10-30T13:03:07.106071780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\" id:\"06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987\" pid:3540 exited_at:{seconds:1761829387 nanos:105745140}" Oct 30 13:03:07.133282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06f0448c1aa87a66fa82163c46ef82bef2caf764b9ee7f28ec0267fe319d7987-rootfs.mount: Deactivated successfully. Oct 30 13:03:07.169635 kubelet[2714]: I1030 13:03:07.168941 2714 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 30 13:03:07.189305 kubelet[2714]: E1030 13:03:07.189278 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:07.239307 systemd[1]: Created slice kubepods-burstable-pod0e2ac15f_e894_4c79_a6a6_e097374e675b.slice - libcontainer container kubepods-burstable-pod0e2ac15f_e894_4c79_a6a6_e097374e675b.slice. 
Oct 30 13:03:07.245894 kubelet[2714]: I1030 13:03:07.244423 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7109f21d-4639-4dbe-8df7-7e79b520c4a3-calico-apiserver-certs\") pod \"calico-apiserver-68ff7b8594-mg9vz\" (UID: \"7109f21d-4639-4dbe-8df7-7e79b520c4a3\") " pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" Oct 30 13:03:07.245894 kubelet[2714]: I1030 13:03:07.244469 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed-tigera-ca-bundle\") pod \"calico-kube-controllers-6475695c54-plmcq\" (UID: \"374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed\") " pod="calico-system/calico-kube-controllers-6475695c54-plmcq" Oct 30 13:03:07.245894 kubelet[2714]: I1030 13:03:07.244486 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8q4\" (UniqueName: \"kubernetes.io/projected/944adcd1-110a-4978-ab0e-71fcac0fc798-kube-api-access-7f8q4\") pod \"coredns-66bc5c9577-djsc9\" (UID: \"944adcd1-110a-4978-ab0e-71fcac0fc798\") " pod="kube-system/coredns-66bc5c9577-djsc9" Oct 30 13:03:07.245894 kubelet[2714]: I1030 13:03:07.244500 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e2ac15f-e894-4c79-a6a6-e097374e675b-config-volume\") pod \"coredns-66bc5c9577-t9zhq\" (UID: \"0e2ac15f-e894-4c79-a6a6-e097374e675b\") " pod="kube-system/coredns-66bc5c9577-t9zhq" Oct 30 13:03:07.245894 kubelet[2714]: I1030 13:03:07.244524 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrdn\" (UniqueName: \"kubernetes.io/projected/374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed-kube-api-access-qqrdn\") pod \"calico-kube-controllers-6475695c54-plmcq\" (UID: \"374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed\") " pod="calico-system/calico-kube-controllers-6475695c54-plmcq" Oct 30 13:03:07.246105 kubelet[2714]: I1030 13:03:07.244540 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/944adcd1-110a-4978-ab0e-71fcac0fc798-config-volume\") pod \"coredns-66bc5c9577-djsc9\" (UID: \"944adcd1-110a-4978-ab0e-71fcac0fc798\") " pod="kube-system/coredns-66bc5c9577-djsc9" Oct 30 13:03:07.246105 kubelet[2714]: I1030 13:03:07.244553 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wkc5\" (UniqueName: \"kubernetes.io/projected/0e2ac15f-e894-4c79-a6a6-e097374e675b-kube-api-access-5wkc5\") pod \"coredns-66bc5c9577-t9zhq\" (UID: \"0e2ac15f-e894-4c79-a6a6-e097374e675b\") " pod="kube-system/coredns-66bc5c9577-t9zhq" Oct 30 13:03:07.246105 kubelet[2714]: I1030 13:03:07.244580 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9snx\" (UniqueName: \"kubernetes.io/projected/7109f21d-4639-4dbe-8df7-7e79b520c4a3-kube-api-access-p9snx\") pod \"calico-apiserver-68ff7b8594-mg9vz\" (UID: \"7109f21d-4639-4dbe-8df7-7e79b520c4a3\") " pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" Oct 30 13:03:07.247628 systemd[1]: Created slice kubepods-besteffort-pod374229d4_b4e3_42c9_a2fb_6c8ea2dc49ed.slice - libcontainer container 
kubepods-besteffort-pod374229d4_b4e3_42c9_a2fb_6c8ea2dc49ed.slice. Oct 30 13:03:07.257248 systemd[1]: Created slice kubepods-besteffort-pod7109f21d_4639_4dbe_8df7_7e79b520c4a3.slice - libcontainer container kubepods-besteffort-pod7109f21d_4639_4dbe_8df7_7e79b520c4a3.slice. Oct 30 13:03:07.262932 systemd[1]: Created slice kubepods-burstable-pod944adcd1_110a_4978_ab0e_71fcac0fc798.slice - libcontainer container kubepods-burstable-pod944adcd1_110a_4978_ab0e_71fcac0fc798.slice. Oct 30 13:03:07.267033 systemd[1]: Created slice kubepods-besteffort-pod15635502_27bd_4151_8ac0_e5341e0aec85.slice - libcontainer container kubepods-besteffort-pod15635502_27bd_4151_8ac0_e5341e0aec85.slice. Oct 30 13:03:07.275046 systemd[1]: Created slice kubepods-besteffort-pod87d5470b_5c41_47d8_8a50_5ec2b3c997cc.slice - libcontainer container kubepods-besteffort-pod87d5470b_5c41_47d8_8a50_5ec2b3c997cc.slice. Oct 30 13:03:07.283265 systemd[1]: Created slice kubepods-besteffort-pod3de0a7ad_5e52_4e0f_a02c_b97bb2482d40.slice - libcontainer container kubepods-besteffort-pod3de0a7ad_5e52_4e0f_a02c_b97bb2482d40.slice. Oct 30 13:03:07.345628 kubelet[2714]: I1030 13:03:07.345588 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwhd\" (UniqueName: \"kubernetes.io/projected/87d5470b-5c41-47d8-8a50-5ec2b3c997cc-kube-api-access-skwhd\") pod \"goldmane-7c778bb748-vjdtn\" (UID: \"87d5470b-5c41-47d8-8a50-5ec2b3c997cc\") " pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.345628 kubelet[2714]: I1030 13:03:07.345645 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3de0a7ad-5e52-4e0f-a02c-b97bb2482d40-calico-apiserver-certs\") pod \"calico-apiserver-68ff7b8594-9nrw5\" (UID: \"3de0a7ad-5e52-4e0f-a02c-b97bb2482d40\") " pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" Oct 30 13:03:07.345792 kubelet[2714]: I1030 13:03:07.345666 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm9p\" (UniqueName: \"kubernetes.io/projected/3de0a7ad-5e52-4e0f-a02c-b97bb2482d40-kube-api-access-2nm9p\") pod \"calico-apiserver-68ff7b8594-9nrw5\" (UID: \"3de0a7ad-5e52-4e0f-a02c-b97bb2482d40\") " pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" Oct 30 13:03:07.345792 kubelet[2714]: I1030 13:03:07.345712 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-ca-bundle\") pod \"whisker-889b7db54-smhsz\" (UID: \"15635502-27bd-4151-8ac0-e5341e0aec85\") " pod="calico-system/whisker-889b7db54-smhsz" Oct 30 13:03:07.345792 kubelet[2714]: I1030 13:03:07.345768 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94f9m\" (UniqueName: \"kubernetes.io/projected/15635502-27bd-4151-8ac0-e5341e0aec85-kube-api-access-94f9m\") pod \"whisker-889b7db54-smhsz\" (UID: \"15635502-27bd-4151-8ac0-e5341e0aec85\") " pod="calico-system/whisker-889b7db54-smhsz" Oct 30 13:03:07.345792 kubelet[2714]: I1030 13:03:07.345785 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-backend-key-pair\") pod \"whisker-889b7db54-smhsz\" (UID: 
\"15635502-27bd-4151-8ac0-e5341e0aec85\") " pod="calico-system/whisker-889b7db54-smhsz" Oct 30 13:03:07.345875 kubelet[2714]: I1030 13:03:07.345799 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d5470b-5c41-47d8-8a50-5ec2b3c997cc-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-vjdtn\" (UID: \"87d5470b-5c41-47d8-8a50-5ec2b3c997cc\") " pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.345875 kubelet[2714]: I1030 13:03:07.345816 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d5470b-5c41-47d8-8a50-5ec2b3c997cc-config\") pod \"goldmane-7c778bb748-vjdtn\" (UID: \"87d5470b-5c41-47d8-8a50-5ec2b3c997cc\") " pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.345875 kubelet[2714]: I1030 13:03:07.345831 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87d5470b-5c41-47d8-8a50-5ec2b3c997cc-goldmane-key-pair\") pod \"goldmane-7c778bb748-vjdtn\" (UID: \"87d5470b-5c41-47d8-8a50-5ec2b3c997cc\") " pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.546217 kubelet[2714]: E1030 13:03:07.546099 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:07.547036 containerd[1567]: time="2025-10-30T13:03:07.547001216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-t9zhq,Uid:0e2ac15f-e894-4c79-a6a6-e097374e675b,Namespace:kube-system,Attempt:0,}" Oct 30 13:03:07.555095 containerd[1567]: time="2025-10-30T13:03:07.555059714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6475695c54-plmcq,Uid:374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:07.566350 containerd[1567]: time="2025-10-30T13:03:07.566274899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-mg9vz,Uid:7109f21d-4639-4dbe-8df7-7e79b520c4a3,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:03:07.567246 kubelet[2714]: E1030 13:03:07.567178 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:07.573176 containerd[1567]: time="2025-10-30T13:03:07.572832514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-889b7db54-smhsz,Uid:15635502-27bd-4151-8ac0-e5341e0aec85,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:07.573176 containerd[1567]: time="2025-10-30T13:03:07.572944354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-djsc9,Uid:944adcd1-110a-4978-ab0e-71fcac0fc798,Namespace:kube-system,Attempt:0,}" Oct 30 13:03:07.583646 containerd[1567]: time="2025-10-30T13:03:07.583527178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vjdtn,Uid:87d5470b-5c41-47d8-8a50-5ec2b3c997cc,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:07.589168 containerd[1567]: time="2025-10-30T13:03:07.588852870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-9nrw5,Uid:3de0a7ad-5e52-4e0f-a02c-b97bb2482d40,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:03:07.670350 containerd[1567]: 
time="2025-10-30T13:03:07.670275014Z" level=error msg="Failed to destroy network for sandbox \"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.675278 containerd[1567]: time="2025-10-30T13:03:07.675231785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-mg9vz,Uid:7109f21d-4639-4dbe-8df7-7e79b520c4a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.675709 kubelet[2714]: E1030 13:03:07.675671 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.675886 kubelet[2714]: E1030 13:03:07.675867 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" Oct 30 13:03:07.676149 kubelet[2714]: E1030 13:03:07.676054 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" Oct 30 13:03:07.676260 kubelet[2714]: E1030 13:03:07.676228 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68ff7b8594-mg9vz_calico-apiserver(7109f21d-4639-4dbe-8df7-7e79b520c4a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68ff7b8594-mg9vz_calico-apiserver(7109f21d-4639-4dbe-8df7-7e79b520c4a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"011400828745e08175358ae7352b20760268b045de618c44f8889efbe39c79bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:07.681104 containerd[1567]: time="2025-10-30T13:03:07.681064038Z" level=error msg="Failed to destroy network for sandbox \"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.682088 containerd[1567]: time="2025-10-30T13:03:07.682052481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vjdtn,Uid:87d5470b-5c41-47d8-8a50-5ec2b3c997cc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.683480 kubelet[2714]: E1030 13:03:07.682321 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.683480 kubelet[2714]: E1030 13:03:07.682368 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.683480 kubelet[2714]: E1030 13:03:07.682384 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vjdtn" Oct 30 13:03:07.683619 kubelet[2714]: E1030 13:03:07.682434 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-vjdtn_calico-system(87d5470b-5c41-47d8-8a50-5ec2b3c997cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-vjdtn_calico-system(87d5470b-5c41-47d8-8a50-5ec2b3c997cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d715d3c0cd21b60b067fe27e9439a7577f402be8a8f4057c5214c636bd98441\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc" Oct 30 13:03:07.688378 containerd[1567]: time="2025-10-30T13:03:07.688344135Z" level=error msg="Failed to destroy network for sandbox \"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.689371 containerd[1567]: time="2025-10-30T13:03:07.689338057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6475695c54-plmcq,Uid:374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.689581 kubelet[2714]: E1030 13:03:07.689545 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.689724 kubelet[2714]: E1030 13:03:07.689703 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" Oct 30 13:03:07.689907 kubelet[2714]: E1030 13:03:07.689804 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" Oct 30 13:03:07.690030 kubelet[2714]: E1030 13:03:07.689886 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6475695c54-plmcq_calico-system(374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6475695c54-plmcq_calico-system(374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"819178c8e6dfd2f4a8784e88a580abc0ecf4321f6393e8eeebb1c3d6c8a8685a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" podUID="374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed" Oct 30 13:03:07.691831 containerd[1567]: time="2025-10-30T13:03:07.691798023Z" level=error msg="Failed to destroy network for sandbox \"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.693389 containerd[1567]: time="2025-10-30T13:03:07.693353546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-t9zhq,Uid:0e2ac15f-e894-4c79-a6a6-e097374e675b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.693658 kubelet[2714]: E1030 13:03:07.693629 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.693715 kubelet[2714]: E1030 13:03:07.693671 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-t9zhq" Oct 30 13:03:07.693715 kubelet[2714]: E1030 13:03:07.693689 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-t9zhq" Oct 30 13:03:07.693760 kubelet[2714]: E1030 13:03:07.693727 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-t9zhq_kube-system(0e2ac15f-e894-4c79-a6a6-e097374e675b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-t9zhq_kube-system(0e2ac15f-e894-4c79-a6a6-e097374e675b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf534bb7a525ae42209794cca042b6e789f9566d343f9d60bad264129234887a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-t9zhq" podUID="0e2ac15f-e894-4c79-a6a6-e097374e675b" Oct 30 13:03:07.695552 containerd[1567]: time="2025-10-30T13:03:07.695517031Z" level=error msg="Failed to destroy network for sandbox \"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.697035 containerd[1567]: time="2025-10-30T13:03:07.696998034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-889b7db54-smhsz,Uid:15635502-27bd-4151-8ac0-e5341e0aec85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.697175 kubelet[2714]: E1030 13:03:07.697150 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.697212 kubelet[2714]: E1030 13:03:07.697186 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-889b7db54-smhsz" Oct 30 13:03:07.697212 kubelet[2714]: E1030 13:03:07.697207 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-889b7db54-smhsz" Oct 30 13:03:07.697402 kubelet[2714]: E1030 13:03:07.697249 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-889b7db54-smhsz_calico-system(15635502-27bd-4151-8ac0-e5341e0aec85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-889b7db54-smhsz_calico-system(15635502-27bd-4151-8ac0-e5341e0aec85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38f4dde592b999001a50afac8d88f7ba63976979fce52127846496c9b5adf8dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-889b7db54-smhsz" podUID="15635502-27bd-4151-8ac0-e5341e0aec85" Oct 30 13:03:07.698142 containerd[1567]: time="2025-10-30T13:03:07.698042117Z" level=error msg="Failed to destroy network for sandbox \"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.698660 containerd[1567]: time="2025-10-30T13:03:07.698606638Z" level=error msg="Failed to destroy network for sandbox \"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.699039 containerd[1567]: time="2025-10-30T13:03:07.698999479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-9nrw5,Uid:3de0a7ad-5e52-4e0f-a02c-b97bb2482d40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.699356 kubelet[2714]: E1030 13:03:07.699266 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.699409 kubelet[2714]: E1030 13:03:07.699370 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" Oct 30 13:03:07.699409 kubelet[2714]: E1030 13:03:07.699388 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" Oct 30 13:03:07.699530 kubelet[2714]: E1030 13:03:07.699488 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68ff7b8594-9nrw5_calico-apiserver(3de0a7ad-5e52-4e0f-a02c-b97bb2482d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68ff7b8594-9nrw5_calico-apiserver(3de0a7ad-5e52-4e0f-a02c-b97bb2482d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d9ec2babbe62b7aeec6ce7f9d755cc64f683055de2a2c5a79784e98ef1eddb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" podUID="3de0a7ad-5e52-4e0f-a02c-b97bb2482d40" Oct 30 13:03:07.700202 containerd[1567]: time="2025-10-30T13:03:07.700165442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-djsc9,Uid:944adcd1-110a-4978-ab0e-71fcac0fc798,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.700346 kubelet[2714]: E1030 13:03:07.700319 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:07.700408 kubelet[2714]: E1030 13:03:07.700357 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-djsc9" Oct 30 13:03:07.700408 kubelet[2714]: E1030 13:03:07.700374 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-djsc9" Oct 30 13:03:07.700458 kubelet[2714]: E1030 13:03:07.700412 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-djsc9_kube-system(944adcd1-110a-4978-ab0e-71fcac0fc798)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-djsc9_kube-system(944adcd1-110a-4978-ab0e-71fcac0fc798)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09b51f09b6de526095c1d445a88a87fc1bb6941ef178ccd8d05aba0f2bbccc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-djsc9" podUID="944adcd1-110a-4978-ab0e-71fcac0fc798" Oct 30 13:03:08.076322 systemd[1]: Created slice kubepods-besteffort-pod9b490fe4_c844_4da4_8382_9773dc4546ac.slice - libcontainer container kubepods-besteffort-pod9b490fe4_c844_4da4_8382_9773dc4546ac.slice. Oct 30 13:03:08.079891 containerd[1567]: time="2025-10-30T13:03:08.079852808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22knm,Uid:9b490fe4-c844-4da4-8382-9773dc4546ac,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:08.125200 containerd[1567]: time="2025-10-30T13:03:08.125151824Z" level=error msg="Failed to destroy network for sandbox \"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:08.126130 containerd[1567]: time="2025-10-30T13:03:08.126099626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22knm,Uid:9b490fe4-c844-4da4-8382-9773dc4546ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:08.126713 kubelet[2714]: E1030 13:03:08.126307 2714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 13:03:08.126713 kubelet[2714]: E1030 13:03:08.126361 2714 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-22knm" Oct 30 13:03:08.126713 kubelet[2714]: E1030 13:03:08.126382 2714 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-22knm" Oct 30 13:03:08.126813 kubelet[2714]: E1030 13:03:08.126427 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24ea5bc05cb616d65db285d3fb900501b948502cfb97c8e4e8da68e1a0f3c3b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:08.193675 kubelet[2714]: E1030 13:03:08.193641 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:08.195223 containerd[1567]: time="2025-10-30T13:03:08.195170612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 13:03:08.505334 systemd[1]: run-netns-cni\x2d0a7e47a7\x2d999b\x2d1581\x2d5d65\x2d95fe20c3490e.mount: Deactivated successfully. Oct 30 13:03:08.505423 systemd[1]: run-netns-cni\x2d35792753\x2d8f1e\x2d26b0\x2db1c6\x2d04bc3ccfeb17.mount: Deactivated successfully. Oct 30 13:03:08.505466 systemd[1]: run-netns-cni\x2d5523dff3\x2d49f2\x2d4996\x2dfd32\x2d70749675b3cc.mount: Deactivated successfully. Oct 30 13:03:08.505506 systemd[1]: run-netns-cni\x2dbcd8a7a5\x2d4848\x2d4b45\x2d36c9\x2d15c1947ebe30.mount: Deactivated successfully. Oct 30 13:03:10.566138 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). Oct 30 13:03:10.625181 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:10.626578 sshd-session[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:10.630965 systemd-logind[1545]: New session 9 of user core. Oct 30 13:03:10.639078 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 13:03:10.725705 sshd[3848]: Connection closed by 10.0.0.1 port 36266 Oct 30 13:03:10.726036 sshd-session[3845]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:10.731044 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Oct 30 13:03:10.731266 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:36266.service: Deactivated successfully. Oct 30 13:03:10.736876 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 13:03:10.738822 systemd-logind[1545]: Removed session 9. 
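Every RunPodSandbox failure in the block above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename and the file does not exist yet, because the calico-node container (whose image is only now being pulled) has not started and written it. A readiness probe for that condition reduces to a stat in a loop; the sketch below illustrates the idea using the path shown in the error messages, and is not Calico's own retry logic.

package main

import (
	"fmt"
	"os"
	"time"
)

// nodenameFile is the path the CNI plugin complains about in the errors above;
// calico-node creates it once it is running and has mounted /var/lib/calico/.
const nodenameFile = "/var/lib/calico/nodename"

func waitForCalicoNode(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := os.Stat(nodenameFile)
		if err == nil {
			return nil // calico-node has registered; sandbox setup can succeed
		}
		if !os.IsNotExist(err) {
			return err // unexpected error, e.g. permissions
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s still missing after %s", nodenameFile, timeout)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForCalicoNode(30 * time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node is up")
}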
Oct 30 13:03:12.115608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889318421.mount: Deactivated successfully. Oct 30 13:03:12.345282 containerd[1567]: time="2025-10-30T13:03:12.345216668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:12.345835 containerd[1567]: time="2025-10-30T13:03:12.345802949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 30 13:03:12.346596 containerd[1567]: time="2025-10-30T13:03:12.346536951Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:12.348311 containerd[1567]: time="2025-10-30T13:03:12.348280193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 13:03:12.348847 containerd[1567]: time="2025-10-30T13:03:12.348681154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.153454542s" Oct 30 13:03:12.348847 containerd[1567]: time="2025-10-30T13:03:12.348713074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 30 13:03:12.377418 containerd[1567]: time="2025-10-30T13:03:12.377327321Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 13:03:12.413391 containerd[1567]: time="2025-10-30T13:03:12.413340900Z" level=info msg="Container ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:03:12.414549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798158079.mount: Deactivated successfully. Oct 30 13:03:12.421617 containerd[1567]: time="2025-10-30T13:03:12.421552113Z" level=info msg="CreateContainer within sandbox \"c5fcb5bf33b4b2309e38d3e687d32b3954a6ce56b7842cc1ed9ad9fe5e591916\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\"" Oct 30 13:03:12.422222 containerd[1567]: time="2025-10-30T13:03:12.422194034Z" level=info msg="StartContainer for \"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\"" Oct 30 13:03:12.423891 containerd[1567]: time="2025-10-30T13:03:12.423859117Z" level=info msg="connecting to shim ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb" address="unix:///run/containerd/s/d0c7ef683d04d25301fc0ffba6cd79c9f11d17aa910750519c1d4e98a0c1e74a" protocol=ttrpc version=3 Oct 30 13:03:12.447093 systemd[1]: Started cri-containerd-ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb.scope - libcontainer container ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb. 
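The mount units above ("...tmpmounts-containerd\x2dmount889318421.mount", and the run-netns-cni\x2d... units a little earlier) show systemd's path escaping: "/" separators become "-", while "-" and other characters outside [A-Za-z0-9:_.] are written as \x followed by the byte value. The sketch below is a simplified rendering of that rule (it ignores corner cases such as a leading ".", which systemd-escape also handles), just to make the unit names readable.

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip outer slashes,
// turn the remaining "/" into "-", and hex-escape anything that is not
// a letter, digit, ":", "_" or ".".
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the shape of the tmpmount unit name in the log above.
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount889318421") + ".mount")
}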
Oct 30 13:03:12.479842 containerd[1567]: time="2025-10-30T13:03:12.479809528Z" level=info msg="StartContainer for \"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\" returns successfully" Oct 30 13:03:12.598057 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 13:03:12.598150 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 13:03:12.790720 kubelet[2714]: I1030 13:03:12.790676 2714 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94f9m\" (UniqueName: \"kubernetes.io/projected/15635502-27bd-4151-8ac0-e5341e0aec85-kube-api-access-94f9m\") pod \"15635502-27bd-4151-8ac0-e5341e0aec85\" (UID: \"15635502-27bd-4151-8ac0-e5341e0aec85\") " Oct 30 13:03:12.791102 kubelet[2714]: I1030 13:03:12.790744 2714 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-backend-key-pair\") pod \"15635502-27bd-4151-8ac0-e5341e0aec85\" (UID: \"15635502-27bd-4151-8ac0-e5341e0aec85\") " Oct 30 13:03:12.791102 kubelet[2714]: I1030 13:03:12.790766 2714 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-ca-bundle\") pod \"15635502-27bd-4151-8ac0-e5341e0aec85\" (UID: \"15635502-27bd-4151-8ac0-e5341e0aec85\") " Oct 30 13:03:12.806966 kubelet[2714]: I1030 13:03:12.806831 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "15635502-27bd-4151-8ac0-e5341e0aec85" (UID: "15635502-27bd-4151-8ac0-e5341e0aec85"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 13:03:12.809665 kubelet[2714]: I1030 13:03:12.809483 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "15635502-27bd-4151-8ac0-e5341e0aec85" (UID: "15635502-27bd-4151-8ac0-e5341e0aec85"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 13:03:12.810217 kubelet[2714]: I1030 13:03:12.810173 2714 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15635502-27bd-4151-8ac0-e5341e0aec85-kube-api-access-94f9m" (OuterVolumeSpecName: "kube-api-access-94f9m") pod "15635502-27bd-4151-8ac0-e5341e0aec85" (UID: "15635502-27bd-4151-8ac0-e5341e0aec85"). InnerVolumeSpecName "kube-api-access-94f9m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 13:03:12.891646 kubelet[2714]: I1030 13:03:12.891603 2714 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94f9m\" (UniqueName: \"kubernetes.io/projected/15635502-27bd-4151-8ac0-e5341e0aec85-kube-api-access-94f9m\") on node \"localhost\" DevicePath \"\"" Oct 30 13:03:12.891646 kubelet[2714]: I1030 13:03:12.891633 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 13:03:12.891646 kubelet[2714]: I1030 13:03:12.891643 2714 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15635502-27bd-4151-8ac0-e5341e0aec85-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 13:03:13.116313 systemd[1]: var-lib-kubelet-pods-15635502\x2d27bd\x2d4151\x2d8ac0\x2de5341e0aec85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d94f9m.mount: Deactivated successfully. Oct 30 13:03:13.116683 systemd[1]: var-lib-kubelet-pods-15635502\x2d27bd\x2d4151\x2d8ac0\x2de5341e0aec85-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 13:03:13.220670 kubelet[2714]: E1030 13:03:13.220630 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:13.223545 systemd[1]: Removed slice kubepods-besteffort-pod15635502_27bd_4151_8ac0_e5341e0aec85.slice - libcontainer container kubepods-besteffort-pod15635502_27bd_4151_8ac0_e5341e0aec85.slice. Oct 30 13:03:13.248361 kubelet[2714]: I1030 13:03:13.248298 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fjx57" podStartSLOduration=1.6865811370000001 podStartE2EDuration="21.243107152s" podCreationTimestamp="2025-10-30 13:02:52 +0000 UTC" firstStartedPulling="2025-10-30 13:02:52.800173672 +0000 UTC m=+24.808110278" lastFinishedPulling="2025-10-30 13:03:12.356699687 +0000 UTC m=+44.364636293" observedRunningTime="2025-10-30 13:03:13.242757911 +0000 UTC m=+45.250694517" watchObservedRunningTime="2025-10-30 13:03:13.243107152 +0000 UTC m=+45.251043798" Oct 30 13:03:13.305549 systemd[1]: Created slice kubepods-besteffort-pod718f0f53_2cc1_483f_b555_8046c49eaff8.slice - libcontainer container kubepods-besteffort-pod718f0f53_2cc1_483f_b555_8046c49eaff8.slice. 
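The pod_startup_latency_tracker entry above encodes a small piece of arithmetic: podStartE2EDuration is the time from the pod's creation to the observed running time, and podStartSLOduration appears to be that figure minus the window spent pulling images (firstStartedPulling to lastFinishedPulling); the numbers in the entry are consistent with that reading. The sketch below reproduces them from the timestamps in the log line; it is a worked example, not kubelet code.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the pod_startup_latency_tracker entry for calico-node-fjx57.
	created := parse("2025-10-30 13:02:52 +0000 UTC")
	firstPull := parse("2025-10-30 13:02:52.800173672 +0000 UTC")
	lastPull := parse("2025-10-30 13:03:12.356699687 +0000 UTC")
	running := parse("2025-10-30 13:03:13.243107152 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // 21.243107152s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.686581137s, the logged podStartSLOduration
	fmt.Println("e2e:", e2e, "slo:", slo)
}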
Oct 30 13:03:13.393118 kubelet[2714]: I1030 13:03:13.392993 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96mj\" (UniqueName: \"kubernetes.io/projected/718f0f53-2cc1-483f-b555-8046c49eaff8-kube-api-access-t96mj\") pod \"whisker-8585bc7456-49r9k\" (UID: \"718f0f53-2cc1-483f-b555-8046c49eaff8\") " pod="calico-system/whisker-8585bc7456-49r9k" Oct 30 13:03:13.393118 kubelet[2714]: I1030 13:03:13.393053 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/718f0f53-2cc1-483f-b555-8046c49eaff8-whisker-backend-key-pair\") pod \"whisker-8585bc7456-49r9k\" (UID: \"718f0f53-2cc1-483f-b555-8046c49eaff8\") " pod="calico-system/whisker-8585bc7456-49r9k" Oct 30 13:03:13.393118 kubelet[2714]: I1030 13:03:13.393073 2714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/718f0f53-2cc1-483f-b555-8046c49eaff8-whisker-ca-bundle\") pod \"whisker-8585bc7456-49r9k\" (UID: \"718f0f53-2cc1-483f-b555-8046c49eaff8\") " pod="calico-system/whisker-8585bc7456-49r9k" Oct 30 13:03:13.618051 containerd[1567]: time="2025-10-30T13:03:13.617992366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8585bc7456-49r9k,Uid:718f0f53-2cc1-483f-b555-8046c49eaff8,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:13.791143 systemd-networkd[1470]: cali1696added5f: Link UP Oct 30 13:03:13.791669 systemd-networkd[1470]: cali1696added5f: Gained carrier Oct 30 13:03:13.803240 containerd[1567]: 2025-10-30 13:03:13.664 [INFO][3930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 13:03:13.803240 containerd[1567]: 2025-10-30 13:03:13.692 [INFO][3930] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8585bc7456--49r9k-eth0 whisker-8585bc7456- calico-system 718f0f53-2cc1-483f-b555-8046c49eaff8 988 0 2025-10-30 13:03:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8585bc7456 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8585bc7456-49r9k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1696added5f [] [] }} ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-" Oct 30 13:03:13.803240 containerd[1567]: 2025-10-30 13:03:13.693 [INFO][3930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803240 containerd[1567]: 2025-10-30 13:03:13.747 [INFO][3944] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" HandleID="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Workload="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.747 [INFO][3944] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" HandleID="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Workload="localhost-k8s-whisker--8585bc7456--49r9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000526c00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8585bc7456-49r9k", "timestamp":"2025-10-30 13:03:13.747639565 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.747 [INFO][3944] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.747 [INFO][3944] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.747 [INFO][3944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.758 [INFO][3944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" host="localhost" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.764 [INFO][3944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.768 [INFO][3944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.770 [INFO][3944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.772 [INFO][3944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:13.803420 containerd[1567]: 2025-10-30 13:03:13.772 [INFO][3944] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" host="localhost" Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.773 [INFO][3944] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2 Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.776 [INFO][3944] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" host="localhost" Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.780 [INFO][3944] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" host="localhost" Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.781 [INFO][3944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" host="localhost" Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.781 [INFO][3944] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
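The IPAM trace above walks through Calico's block-affinity allocation: the host confirms affinity for the /26 block 192.168.88.128/26, then assigns 192.168.88.129 out of it for the whisker pod. The sketch below only double-checks that addressing with the standard library; it illustrates the arithmetic, not Calico's allocator.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // block claimed with host affinity above
	assigned := netip.MustParseAddr("192.168.88.129")   // address handed to whisker-8585bc7456-49r9k

	fmt.Println("assigned address is inside the block:", block.Contains(assigned)) // true
	// A /26 leaves 32-26 = 6 host bits, so each block covers 64 addresses.
	fmt.Println("addresses per /26 block:", 1<<(32-block.Bits()))
}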
Oct 30 13:03:13.803623 containerd[1567]: 2025-10-30 13:03:13.781 [INFO][3944] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" HandleID="k8s-pod-network.fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Workload="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803723 containerd[1567]: 2025-10-30 13:03:13.783 [INFO][3930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8585bc7456--49r9k-eth0", GenerateName:"whisker-8585bc7456-", Namespace:"calico-system", SelfLink:"", UID:"718f0f53-2cc1-483f-b555-8046c49eaff8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8585bc7456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8585bc7456-49r9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1696added5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:13.803723 containerd[1567]: 2025-10-30 13:03:13.784 [INFO][3930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803789 containerd[1567]: 2025-10-30 13:03:13.784 [INFO][3930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1696added5f ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803789 containerd[1567]: 2025-10-30 13:03:13.792 [INFO][3930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.803831 containerd[1567]: 2025-10-30 13:03:13.792 [INFO][3930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8585bc7456--49r9k-eth0", GenerateName:"whisker-8585bc7456-", Namespace:"calico-system", SelfLink:"", UID:"718f0f53-2cc1-483f-b555-8046c49eaff8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8585bc7456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2", Pod:"whisker-8585bc7456-49r9k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1696added5f", MAC:"12:e5:7b:c3:d0:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:13.803880 containerd[1567]: 2025-10-30 13:03:13.801 [INFO][3930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" Namespace="calico-system" Pod="whisker-8585bc7456-49r9k" WorkloadEndpoint="localhost-k8s-whisker--8585bc7456--49r9k-eth0" Oct 30 13:03:13.866497 containerd[1567]: time="2025-10-30T13:03:13.866456867Z" level=info msg="connecting to shim fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2" address="unix:///run/containerd/s/d3e0a8820961eb7fe2822b85c5004949b0161bfcd4cb61f0c4276bc218079bcc" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:13.917475 systemd[1]: Started cri-containerd-fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2.scope - libcontainer container fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2. 
Oct 30 13:03:13.949207 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:14.027769 containerd[1567]: time="2025-10-30T13:03:14.027719352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8585bc7456-49r9k,Uid:718f0f53-2cc1-483f-b555-8046c49eaff8,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd6621b26e3e591da73df81d71159f10a7e250438681433f84daa1869c93f8a2\"" Oct 30 13:03:14.030889 containerd[1567]: time="2025-10-30T13:03:14.030597836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:03:14.075746 kubelet[2714]: I1030 13:03:14.075661 2714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15635502-27bd-4151-8ac0-e5341e0aec85" path="/var/lib/kubelet/pods/15635502-27bd-4151-8ac0-e5341e0aec85/volumes" Oct 30 13:03:14.224766 kubelet[2714]: E1030 13:03:14.224723 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:14.260957 containerd[1567]: time="2025-10-30T13:03:14.260886607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:14.262311 containerd[1567]: time="2025-10-30T13:03:14.262267009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:03:14.262367 containerd[1567]: time="2025-10-30T13:03:14.262311009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:03:14.262672 kubelet[2714]: E1030 13:03:14.262631 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:14.266244 kubelet[2714]: E1030 13:03:14.265961 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:14.271301 kubelet[2714]: E1030 13:03:14.271228 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:14.272634 containerd[1567]: time="2025-10-30T13:03:14.272602544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:03:14.304380 systemd-networkd[1470]: vxlan.calico: Link UP Oct 30 13:03:14.304393 systemd-networkd[1470]: vxlan.calico: Gained carrier Oct 30 13:03:14.379750 containerd[1567]: time="2025-10-30T13:03:14.379634737Z" level=info msg="TaskExit event 
in podsandbox handler container_id:\"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\" id:\"29453799a32eb2273833694ce84b2035c7c3bec72c3d7fa89b406a4994cb3851\" pid:4168 exit_status:1 exited_at:{seconds:1761829394 nanos:379331977}" Oct 30 13:03:14.877187 systemd-networkd[1470]: cali1696added5f: Gained IPv6LL Oct 30 13:03:15.225760 kubelet[2714]: E1030 13:03:15.225729 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:15.297471 containerd[1567]: time="2025-10-30T13:03:15.297431790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\" id:\"a7f75966b208ec4f45beb93731127453e23c75d7f24cfacd7175f1bfcac4ce13\" pid:4239 exit_status:1 exited_at:{seconds:1761829395 nanos:297100829}" Oct 30 13:03:15.386770 containerd[1567]: time="2025-10-30T13:03:15.386729310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:15.388134 containerd[1567]: time="2025-10-30T13:03:15.388096032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:03:15.388184 containerd[1567]: time="2025-10-30T13:03:15.388161672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:03:15.388424 kubelet[2714]: E1030 13:03:15.388395 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:03:15.388477 kubelet[2714]: E1030 13:03:15.388433 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:03:15.388524 kubelet[2714]: E1030 13:03:15.388504 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:15.388720 kubelet[2714]: E1030 13:03:15.388555 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8585bc7456-49r9k" podUID="718f0f53-2cc1-483f-b555-8046c49eaff8" Oct 30 13:03:15.741455 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:36268.service - OpenSSH per-connection server daemon (10.0.0.1:36268). Oct 30 13:03:15.807562 sshd[4252]: Accepted publickey for core from 10.0.0.1 port 36268 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:15.808686 sshd-session[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:15.814073 systemd-logind[1545]: New session 10 of user core. Oct 30 13:03:15.822079 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 13:03:15.927455 sshd[4255]: Connection closed by 10.0.0.1 port 36268 Oct 30 13:03:15.927380 sshd-session[4252]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:15.936990 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:36268.service: Deactivated successfully. Oct 30 13:03:15.938535 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 13:03:15.939231 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Oct 30 13:03:15.941220 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:36284.service - OpenSSH per-connection server daemon (10.0.0.1:36284). Oct 30 13:03:15.942499 systemd-logind[1545]: Removed session 10. Oct 30 13:03:16.002876 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 36284 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:16.003973 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:16.007483 systemd-logind[1545]: New session 11 of user core. Oct 30 13:03:16.018650 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 13:03:16.092528 systemd-networkd[1470]: vxlan.calico: Gained IPv6LL Oct 30 13:03:16.153539 sshd[4275]: Connection closed by 10.0.0.1 port 36284 Oct 30 13:03:16.154404 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:16.165390 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:36284.service: Deactivated successfully. Oct 30 13:03:16.167760 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 13:03:16.171022 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. Oct 30 13:03:16.176328 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288). Oct 30 13:03:16.177987 systemd-logind[1545]: Removed session 11. 
Oct 30 13:03:16.228369 kubelet[2714]: E1030 13:03:16.228309 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8585bc7456-49r9k" podUID="718f0f53-2cc1-483f-b555-8046c49eaff8" Oct 30 13:03:16.232213 sshd[4286]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:16.233700 sshd-session[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:16.241949 systemd-logind[1545]: New session 12 of user core. Oct 30 13:03:16.259160 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 30 13:03:16.355692 sshd[4292]: Connection closed by 10.0.0.1 port 36288 Oct 30 13:03:16.356153 sshd-session[4286]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:16.361753 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:36288.service: Deactivated successfully. Oct 30 13:03:16.363725 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 13:03:16.365050 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Oct 30 13:03:16.367601 systemd-logind[1545]: Removed session 12. 
Oct 30 13:03:18.074804 containerd[1567]: time="2025-10-30T13:03:18.074764386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vjdtn,Uid:87d5470b-5c41-47d8-8a50-5ec2b3c997cc,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:18.197011 systemd-networkd[1470]: calid2b66bd94c7: Link UP Oct 30 13:03:18.197215 systemd-networkd[1470]: calid2b66bd94c7: Gained carrier Oct 30 13:03:18.264770 containerd[1567]: 2025-10-30 13:03:18.108 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--vjdtn-eth0 goldmane-7c778bb748- calico-system 87d5470b-5c41-47d8-8a50-5ec2b3c997cc 906 0 2025-10-30 13:02:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-vjdtn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid2b66bd94c7 [] [] }} ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-" Oct 30 13:03:18.264770 containerd[1567]: 2025-10-30 13:03:18.109 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.264770 containerd[1567]: 2025-10-30 13:03:18.142 [INFO][4320] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" HandleID="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Workload="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.142 [INFO][4320] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" HandleID="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Workload="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000343080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-vjdtn", "timestamp":"2025-10-30 13:03:18.142173541 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.142 [INFO][4320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.142 [INFO][4320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.142 [INFO][4320] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.152 [INFO][4320] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" host="localhost" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.159 [INFO][4320] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.163 [INFO][4320] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.165 [INFO][4320] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.167 [INFO][4320] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:18.264974 containerd[1567]: 2025-10-30 13:03:18.167 [INFO][4320] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" host="localhost" Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.168 [INFO][4320] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259 Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.181 [INFO][4320] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" host="localhost" Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.192 [INFO][4320] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" host="localhost" Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.192 [INFO][4320] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" host="localhost" Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.192 [INFO][4320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:18.265682 containerd[1567]: 2025-10-30 13:03:18.192 [INFO][4320] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" HandleID="k8s-pod-network.d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Workload="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.265796 containerd[1567]: 2025-10-30 13:03:18.194 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vjdtn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d5470b-5c41-47d8-8a50-5ec2b3c997cc", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-vjdtn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid2b66bd94c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:18.265796 containerd[1567]: 2025-10-30 13:03:18.194 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.265866 containerd[1567]: 2025-10-30 13:03:18.194 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2b66bd94c7 ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.265866 containerd[1567]: 2025-10-30 13:03:18.196 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.265905 containerd[1567]: 2025-10-30 13:03:18.198 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--vjdtn-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d5470b-5c41-47d8-8a50-5ec2b3c997cc", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259", Pod:"goldmane-7c778bb748-vjdtn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid2b66bd94c7", MAC:"1a:84:a0:a1:52:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:18.266132 containerd[1567]: 2025-10-30 13:03:18.261 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" Namespace="calico-system" Pod="goldmane-7c778bb748-vjdtn" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--vjdtn-eth0" Oct 30 13:03:18.419644 containerd[1567]: time="2025-10-30T13:03:18.419044088Z" level=info msg="connecting to shim d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259" address="unix:///run/containerd/s/0d6bc4bb8f42191e375d44bf6abba7571a667a6fa193ec568d18d0c4c84fc4e0" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:18.442109 systemd[1]: Started cri-containerd-d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259.scope - libcontainer container d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259. 
Oct 30 13:03:18.453917 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:18.474234 containerd[1567]: time="2025-10-30T13:03:18.474197470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vjdtn,Uid:87d5470b-5c41-47d8-8a50-5ec2b3c997cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d16a8b63c1e5e534c90f5e69f63b2139ca76eb6498b96181fcb50143c75fb259\"" Oct 30 13:03:18.477154 containerd[1567]: time="2025-10-30T13:03:18.477126633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:03:19.321930 containerd[1567]: time="2025-10-30T13:03:19.321855908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:19.322869 containerd[1567]: time="2025-10-30T13:03:19.322816949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:03:19.322974 containerd[1567]: time="2025-10-30T13:03:19.322901749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:19.323164 kubelet[2714]: E1030 13:03:19.323106 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:03:19.323164 kubelet[2714]: E1030 13:03:19.323161 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:03:19.323406 kubelet[2714]: E1030 13:03:19.323234 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vjdtn_calico-system(87d5470b-5c41-47d8-8a50-5ec2b3c997cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:19.323406 kubelet[2714]: E1030 13:03:19.323264 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc" Oct 30 13:03:19.548116 systemd-networkd[1470]: calid2b66bd94c7: Gained IPv6LL Oct 30 13:03:20.075770 containerd[1567]: time="2025-10-30T13:03:20.075729368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-mg9vz,Uid:7109f21d-4639-4dbe-8df7-7e79b520c4a3,Namespace:calico-apiserver,Attempt:0,}" Oct 30 
13:03:20.176914 systemd-networkd[1470]: caliaec41be32f3: Link UP Oct 30 13:03:20.177768 systemd-networkd[1470]: caliaec41be32f3: Gained carrier Oct 30 13:03:20.192122 containerd[1567]: 2025-10-30 13:03:20.111 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0 calico-apiserver-68ff7b8594- calico-apiserver 7109f21d-4639-4dbe-8df7-7e79b520c4a3 903 0 2025-10-30 13:02:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68ff7b8594 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68ff7b8594-mg9vz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaec41be32f3 [] [] }} ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-" Oct 30 13:03:20.192122 containerd[1567]: 2025-10-30 13:03:20.112 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.192122 containerd[1567]: 2025-10-30 13:03:20.135 [INFO][4411] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" HandleID="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Workload="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.135 [INFO][4411] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" HandleID="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Workload="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68ff7b8594-mg9vz", "timestamp":"2025-10-30 13:03:20.135749346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.136 [INFO][4411] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.136 [INFO][4411] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.136 [INFO][4411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.145 [INFO][4411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" host="localhost" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.150 [INFO][4411] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.154 [INFO][4411] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.156 [INFO][4411] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.158 [INFO][4411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:20.192331 containerd[1567]: 2025-10-30 13:03:20.158 [INFO][4411] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" host="localhost" Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.160 [INFO][4411] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50 Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.166 [INFO][4411] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" host="localhost" Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.171 [INFO][4411] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" host="localhost" Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.171 [INFO][4411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" host="localhost" Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.171 [INFO][4411] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:20.193049 containerd[1567]: 2025-10-30 13:03:20.171 [INFO][4411] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" HandleID="k8s-pod-network.7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Workload="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.193170 containerd[1567]: 2025-10-30 13:03:20.173 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0", GenerateName:"calico-apiserver-68ff7b8594-", Namespace:"calico-apiserver", SelfLink:"", UID:"7109f21d-4639-4dbe-8df7-7e79b520c4a3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff7b8594", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68ff7b8594-mg9vz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaec41be32f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:20.193228 containerd[1567]: 2025-10-30 13:03:20.174 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.193228 containerd[1567]: 2025-10-30 13:03:20.174 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaec41be32f3 ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.193228 containerd[1567]: 2025-10-30 13:03:20.177 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.193286 containerd[1567]: 2025-10-30 13:03:20.180 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0", GenerateName:"calico-apiserver-68ff7b8594-", Namespace:"calico-apiserver", SelfLink:"", UID:"7109f21d-4639-4dbe-8df7-7e79b520c4a3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff7b8594", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50", Pod:"calico-apiserver-68ff7b8594-mg9vz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaec41be32f3", MAC:"c2:5e:05:7a:53:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:20.193335 containerd[1567]: 2025-10-30 13:03:20.189 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-mg9vz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--mg9vz-eth0" Oct 30 13:03:20.214397 containerd[1567]: time="2025-10-30T13:03:20.214343463Z" level=info msg="connecting to shim 7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50" address="unix:///run/containerd/s/049b0cfc9935f54b01dd090092d5d18321365697f6661ecfb557472841fe1731" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:20.236669 kubelet[2714]: E1030 13:03:20.235871 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc" Oct 30 13:03:20.238159 systemd[1]: Started cri-containerd-7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50.scope - libcontainer container 7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50. 
Oct 30 13:03:20.259421 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:20.282505 containerd[1567]: time="2025-10-30T13:03:20.282466890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-mg9vz,Uid:7109f21d-4639-4dbe-8df7-7e79b520c4a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7be492bfaf7d6027b3c0dca436302c56adea23a3c1ef16ee9427e3095216ce50\"" Oct 30 13:03:20.284052 containerd[1567]: time="2025-10-30T13:03:20.284026211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:03:20.530651 containerd[1567]: time="2025-10-30T13:03:20.530481931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:20.531767 containerd[1567]: time="2025-10-30T13:03:20.531666173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:03:20.531767 containerd[1567]: time="2025-10-30T13:03:20.531742493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:20.532054 kubelet[2714]: E1030 13:03:20.531998 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:20.532422 kubelet[2714]: E1030 13:03:20.532059 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:20.532422 kubelet[2714]: E1030 13:03:20.532141 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68ff7b8594-mg9vz_calico-apiserver(7109f21d-4639-4dbe-8df7-7e79b520c4a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:20.532422 kubelet[2714]: E1030 13:03:20.532175 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:21.073380 containerd[1567]: time="2025-10-30T13:03:21.073334577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-9nrw5,Uid:3de0a7ad-5e52-4e0f-a02c-b97bb2482d40,Namespace:calico-apiserver,Attempt:0,}" Oct 30 13:03:21.074702 
containerd[1567]: time="2025-10-30T13:03:21.074659698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22knm,Uid:9b490fe4-c844-4da4-8382-9773dc4546ac,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:21.183867 systemd-networkd[1470]: cali905cd7dd20a: Link UP Oct 30 13:03:21.184073 systemd-networkd[1470]: cali905cd7dd20a: Gained carrier Oct 30 13:03:21.201261 containerd[1567]: 2025-10-30 13:03:21.120 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--22knm-eth0 csi-node-driver- calico-system 9b490fe4-c844-4da4-8382-9773dc4546ac 731 0 2025-10-30 13:02:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-22knm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali905cd7dd20a [] [] }} ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-" Oct 30 13:03:21.201261 containerd[1567]: 2025-10-30 13:03:21.120 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.201261 containerd[1567]: 2025-10-30 13:03:21.145 [INFO][4505] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" HandleID="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Workload="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4505] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" HandleID="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Workload="localhost-k8s-csi--node--driver--22knm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-22knm", "timestamp":"2025-10-30 13:03:21.145838003 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.155 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" host="localhost" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.159 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.163 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.165 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.167 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:21.201451 containerd[1567]: 2025-10-30 13:03:21.167 [INFO][4505] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" host="localhost" Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.169 [INFO][4505] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185 Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.172 [INFO][4505] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" host="localhost" Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.177 [INFO][4505] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" host="localhost" Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.178 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" host="localhost" Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.178 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:21.201710 containerd[1567]: 2025-10-30 13:03:21.178 [INFO][4505] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" HandleID="k8s-pod-network.e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Workload="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.201824 containerd[1567]: 2025-10-30 13:03:21.181 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22knm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b490fe4-c844-4da4-8382-9773dc4546ac", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-22knm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali905cd7dd20a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:21.201874 containerd[1567]: 2025-10-30 13:03:21.181 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.201874 containerd[1567]: 2025-10-30 13:03:21.181 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali905cd7dd20a ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.201874 containerd[1567]: 2025-10-30 13:03:21.184 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.202273 containerd[1567]: 2025-10-30 13:03:21.185 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--22knm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b490fe4-c844-4da4-8382-9773dc4546ac", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185", Pod:"csi-node-driver-22knm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali905cd7dd20a", MAC:"06:c3:6b:5e:85:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:21.202364 containerd[1567]: 2025-10-30 13:03:21.198 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" Namespace="calico-system" Pod="csi-node-driver-22knm" WorkloadEndpoint="localhost-k8s-csi--node--driver--22knm-eth0" Oct 30 13:03:21.222987 containerd[1567]: time="2025-10-30T13:03:21.222949673Z" level=info msg="connecting to shim e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185" address="unix:///run/containerd/s/21e94c33725dad716e1ce32822d86030c2059959b0c1f5af3f8808cdd76d297e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:21.239599 kubelet[2714]: E1030 13:03:21.239521 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:21.241216 systemd[1]: Started cri-containerd-e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185.scope - libcontainer container e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185. 
Oct 30 13:03:21.265083 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:21.285862 containerd[1567]: time="2025-10-30T13:03:21.285827331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-22knm,Uid:9b490fe4-c844-4da4-8382-9773dc4546ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"e219b9b5a0b2fe9c3e366dbf7e04bd678494513294d3c05c50ec395e9c31c185\"" Oct 30 13:03:21.289572 containerd[1567]: time="2025-10-30T13:03:21.289485934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:03:21.295184 systemd-networkd[1470]: cali1d1d17bf7e1: Link UP Oct 30 13:03:21.295719 systemd-networkd[1470]: cali1d1d17bf7e1: Gained carrier Oct 30 13:03:21.308798 containerd[1567]: 2025-10-30 13:03:21.117 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0 calico-apiserver-68ff7b8594- calico-apiserver 3de0a7ad-5e52-4e0f-a02c-b97bb2482d40 905 0 2025-10-30 13:02:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68ff7b8594 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68ff7b8594-9nrw5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d1d17bf7e1 [] [] }} ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-" Oct 30 13:03:21.308798 containerd[1567]: 2025-10-30 13:03:21.117 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.308798 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4507] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" HandleID="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Workload="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4507] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" HandleID="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Workload="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68ff7b8594-9nrw5", "timestamp":"2025-10-30 13:03:21.146116403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.146 [INFO][4507] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.178 [INFO][4507] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.178 [INFO][4507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.256 [INFO][4507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" host="localhost" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.263 [INFO][4507] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.268 [INFO][4507] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.270 [INFO][4507] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.273 [INFO][4507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:21.309035 containerd[1567]: 2025-10-30 13:03:21.273 [INFO][4507] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" host="localhost" Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.274 [INFO][4507] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.279 [INFO][4507] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" host="localhost" Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.286 [INFO][4507] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" host="localhost" Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.286 [INFO][4507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" host="localhost" Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.287 [INFO][4507] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:21.309370 containerd[1567]: 2025-10-30 13:03:21.287 [INFO][4507] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" HandleID="k8s-pod-network.a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Workload="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.309519 containerd[1567]: 2025-10-30 13:03:21.292 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0", GenerateName:"calico-apiserver-68ff7b8594-", Namespace:"calico-apiserver", SelfLink:"", UID:"3de0a7ad-5e52-4e0f-a02c-b97bb2482d40", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff7b8594", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68ff7b8594-9nrw5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d1d17bf7e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:21.309698 containerd[1567]: 2025-10-30 13:03:21.292 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.309698 containerd[1567]: 2025-10-30 13:03:21.292 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d1d17bf7e1 ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.309698 containerd[1567]: 2025-10-30 13:03:21.295 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.309770 containerd[1567]: 2025-10-30 13:03:21.296 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0", GenerateName:"calico-apiserver-68ff7b8594-", Namespace:"calico-apiserver", SelfLink:"", UID:"3de0a7ad-5e52-4e0f-a02c-b97bb2482d40", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68ff7b8594", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd", Pod:"calico-apiserver-68ff7b8594-9nrw5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d1d17bf7e1", MAC:"6a:25:88:56:bb:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:21.309826 containerd[1567]: 2025-10-30 13:03:21.305 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" Namespace="calico-apiserver" Pod="calico-apiserver-68ff7b8594-9nrw5" WorkloadEndpoint="localhost-k8s-calico--apiserver--68ff7b8594--9nrw5-eth0" Oct 30 13:03:21.328853 containerd[1567]: time="2025-10-30T13:03:21.328741970Z" level=info msg="connecting to shim a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd" address="unix:///run/containerd/s/b1eaa4f5d33156dd973f9e695256d263a415e1892a74feb3b43d205fa6af152e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:21.360126 systemd[1]: Started cri-containerd-a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd.scope - libcontainer container a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd. Oct 30 13:03:21.366253 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:57236.service - OpenSSH per-connection server daemon (10.0.0.1:57236). 
Oct 30 13:03:21.374936 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:21.396749 containerd[1567]: time="2025-10-30T13:03:21.396628352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68ff7b8594-9nrw5,Uid:3de0a7ad-5e52-4e0f-a02c-b97bb2482d40,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a269e61cbae744b59f532ffe6a4ae906ec7c1da3422c431bb2a43849b03230bd\"" Oct 30 13:03:21.420840 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 57236 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:21.422378 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:21.426751 systemd-logind[1545]: New session 13 of user core. Oct 30 13:03:21.443094 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 13:03:21.504305 containerd[1567]: time="2025-10-30T13:03:21.504254251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:21.505082 containerd[1567]: time="2025-10-30T13:03:21.505046692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:03:21.505150 containerd[1567]: time="2025-10-30T13:03:21.505130492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:03:21.505339 kubelet[2714]: E1030 13:03:21.505304 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:03:21.505394 kubelet[2714]: E1030 13:03:21.505352 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:03:21.505568 kubelet[2714]: E1030 13:03:21.505535 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:21.506386 containerd[1567]: time="2025-10-30T13:03:21.506352853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:03:21.551489 sshd[4638]: Connection closed by 10.0.0.1 port 57236 Oct 30 13:03:21.551983 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:21.555607 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:57236.service: Deactivated successfully. Oct 30 13:03:21.558176 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 13:03:21.561002 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. 
Oct 30 13:03:21.562913 systemd-logind[1545]: Removed session 13. Oct 30 13:03:21.852131 systemd-networkd[1470]: caliaec41be32f3: Gained IPv6LL Oct 30 13:03:22.072743 kubelet[2714]: E1030 13:03:22.072711 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:22.073257 containerd[1567]: time="2025-10-30T13:03:22.073222607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-t9zhq,Uid:0e2ac15f-e894-4c79-a6a6-e097374e675b,Namespace:kube-system,Attempt:0,}" Oct 30 13:03:22.074778 containerd[1567]: time="2025-10-30T13:03:22.074706168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6475695c54-plmcq,Uid:374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed,Namespace:calico-system,Attempt:0,}" Oct 30 13:03:22.199578 systemd-networkd[1470]: cali40a7430b917: Link UP Oct 30 13:03:22.199867 systemd-networkd[1470]: cali40a7430b917: Gained carrier Oct 30 13:03:22.216347 containerd[1567]: 2025-10-30 13:03:22.126 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--t9zhq-eth0 coredns-66bc5c9577- kube-system 0e2ac15f-e894-4c79-a6a6-e097374e675b 894 0 2025-10-30 13:02:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-t9zhq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40a7430b917 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-" Oct 30 13:03:22.216347 containerd[1567]: 2025-10-30 13:03:22.126 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.216347 containerd[1567]: 2025-10-30 13:03:22.156 [INFO][4680] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" HandleID="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Workload="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.156 [INFO][4680] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" HandleID="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Workload="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c220), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-t9zhq", "timestamp":"2025-10-30 13:03:22.156780439 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:22.216581 containerd[1567]: 
2025-10-30 13:03:22.157 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.157 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.157 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.169 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" host="localhost" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.175 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.179 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.181 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.184 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:22.216581 containerd[1567]: 2025-10-30 13:03:22.184 [INFO][4680] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" host="localhost" Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.185 [INFO][4680] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0 Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.189 [INFO][4680] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" host="localhost" Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4680] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" host="localhost" Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" host="localhost" Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:22.216779 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4680] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" HandleID="k8s-pod-network.1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Workload="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.197 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--t9zhq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e2ac15f-e894-4c79-a6a6-e097374e675b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-t9zhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40a7430b917", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.197 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.197 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40a7430b917 ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.200 
[INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.201 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--t9zhq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e2ac15f-e894-4c79-a6a6-e097374e675b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0", Pod:"coredns-66bc5c9577-t9zhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40a7430b917", MAC:"ee:a6:00:6f:05:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:22.217059 containerd[1567]: 2025-10-30 13:03:22.211 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" Namespace="kube-system" Pod="coredns-66bc5c9577-t9zhq" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--t9zhq-eth0" Oct 30 13:03:22.241248 containerd[1567]: time="2025-10-30T13:03:22.241207711Z" level=info msg="connecting to shim 1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0" address="unix:///run/containerd/s/f1ed99c85b32abfcc3e92f4e2e784c28afd868722d0f13df625088d9ed4600ae" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:22.245334 kubelet[2714]: E1030 13:03:22.245280 2714 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:22.272090 systemd[1]: Started cri-containerd-1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0.scope - libcontainer container 1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0. Oct 30 13:03:22.291592 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:22.311588 systemd-networkd[1470]: cali7bbfa61bd25: Link UP Oct 30 13:03:22.312103 systemd-networkd[1470]: cali7bbfa61bd25: Gained carrier Oct 30 13:03:22.326391 containerd[1567]: time="2025-10-30T13:03:22.326165624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-t9zhq,Uid:0e2ac15f-e894-4c79-a6a6-e097374e675b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0\"" Oct 30 13:03:22.328031 kubelet[2714]: E1030 13:03:22.327995 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.128 [INFO][4651] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0 calico-kube-controllers-6475695c54- calico-system 374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed 899 0 2025-10-30 13:02:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6475695c54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6475695c54-plmcq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7bbfa61bd25 [] [] }} ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.129 [INFO][4651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.157 [INFO][4686] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" HandleID="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Workload="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.157 [INFO][4686] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" HandleID="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Workload="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6475695c54-plmcq", "timestamp":"2025-10-30 13:03:22.157080959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.157 [INFO][4686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.194 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.268 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.276 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.281 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.284 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.287 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.287 [INFO][4686] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.289 [INFO][4686] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.293 [INFO][4686] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.299 [INFO][4686] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.299 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" host="localhost" Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.299 [INFO][4686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:22.331748 containerd[1567]: 2025-10-30 13:03:22.299 [INFO][4686] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" HandleID="k8s-pod-network.1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Workload="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.304 [INFO][4651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0", GenerateName:"calico-kube-controllers-6475695c54-", Namespace:"calico-system", SelfLink:"", UID:"374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6475695c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6475695c54-plmcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bbfa61bd25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.304 [INFO][4651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.304 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bbfa61bd25 ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.313 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.313 [INFO][4651] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0", GenerateName:"calico-kube-controllers-6475695c54-", Namespace:"calico-system", SelfLink:"", UID:"374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6475695c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f", Pod:"calico-kube-controllers-6475695c54-plmcq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bbfa61bd25", MAC:"86:9d:df:0f:89:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:22.332436 containerd[1567]: 2025-10-30 13:03:22.326 [INFO][4651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" Namespace="calico-system" Pod="calico-kube-controllers-6475695c54-plmcq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6475695c54--plmcq-eth0" Oct 30 13:03:22.332848 containerd[1567]: time="2025-10-30T13:03:22.332142269Z" level=info msg="CreateContainer within sandbox \"1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:03:22.346073 containerd[1567]: time="2025-10-30T13:03:22.346033321Z" level=info msg="Container 55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:03:22.355413 containerd[1567]: time="2025-10-30T13:03:22.355165729Z" level=info msg="connecting to shim 1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f" address="unix:///run/containerd/s/5c513d88704456b44b116e1ca9dbe10b3e70a063661c5324569ed767f5ff1389" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:22.359985 containerd[1567]: time="2025-10-30T13:03:22.359945533Z" level=info msg="CreateContainer within sandbox \"1e6990cc5e202d918cf80b1fbb606190db9de3c3a8aba0deab46409a5da151e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e\"" Oct 30 13:03:22.361957 containerd[1567]: time="2025-10-30T13:03:22.361144734Z" level=info msg="StartContainer for 
\"55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e\"" Oct 30 13:03:22.362859 containerd[1567]: time="2025-10-30T13:03:22.362292495Z" level=info msg="connecting to shim 55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e" address="unix:///run/containerd/s/f1ed99c85b32abfcc3e92f4e2e784c28afd868722d0f13df625088d9ed4600ae" protocol=ttrpc version=3 Oct 30 13:03:22.385103 systemd[1]: Started cri-containerd-1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f.scope - libcontainer container 1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f. Oct 30 13:03:22.386388 systemd[1]: Started cri-containerd-55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e.scope - libcontainer container 55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e. Oct 30 13:03:22.396976 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:22.419442 containerd[1567]: time="2025-10-30T13:03:22.419405984Z" level=info msg="StartContainer for \"55ffe8fa563fa0f68d29fc8eb6ca774fa41da3d7e913ae1f65bd081a3d8a2b5e\" returns successfully" Oct 30 13:03:22.426478 containerd[1567]: time="2025-10-30T13:03:22.426446030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6475695c54-plmcq,Uid:374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f38e1828e3fed79cae0cd3ae631401a974b33c713f2b47d4c65cfa3a9fe287f\"" Oct 30 13:03:22.428474 systemd-networkd[1470]: cali905cd7dd20a: Gained IPv6LL Oct 30 13:03:22.492074 systemd-networkd[1470]: cali1d1d17bf7e1: Gained IPv6LL Oct 30 13:03:23.073120 kubelet[2714]: E1030 13:03:23.073085 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:23.073586 containerd[1567]: time="2025-10-30T13:03:23.073554581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-djsc9,Uid:944adcd1-110a-4978-ab0e-71fcac0fc798,Namespace:kube-system,Attempt:0,}" Oct 30 13:03:23.084323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268623968.mount: Deactivated successfully. 
Oct 30 13:03:23.175763 systemd-networkd[1470]: cali8117b22d57a: Link UP Oct 30 13:03:23.176071 systemd-networkd[1470]: cali8117b22d57a: Gained carrier Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.114 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--djsc9-eth0 coredns-66bc5c9577- kube-system 944adcd1-110a-4978-ab0e-71fcac0fc798 902 0 2025-10-30 13:02:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-djsc9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8117b22d57a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.114 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.137 [INFO][4859] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" HandleID="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Workload="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.137 [INFO][4859] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" HandleID="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Workload="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-djsc9", "timestamp":"2025-10-30 13:03:23.137777993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.138 [INFO][4859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.138 [INFO][4859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.138 [INFO][4859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.147 [INFO][4859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.152 [INFO][4859] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.156 [INFO][4859] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.158 [INFO][4859] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.160 [INFO][4859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.160 [INFO][4859] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.162 [INFO][4859] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.165 [INFO][4859] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.171 [INFO][4859] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.171 [INFO][4859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" host="localhost" Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.171 [INFO][4859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 13:03:23.188578 containerd[1567]: 2025-10-30 13:03:23.171 [INFO][4859] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" HandleID="k8s-pod-network.6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Workload="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.173 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--djsc9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"944adcd1-110a-4978-ab0e-71fcac0fc798", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-djsc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8117b22d57a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.173 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.173 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8117b22d57a ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.176 
[INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.177 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--djsc9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"944adcd1-110a-4978-ab0e-71fcac0fc798", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 13, 2, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b", Pod:"coredns-66bc5c9577-djsc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8117b22d57a", MAC:"16:95:4c:c1:c8:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 13:03:23.189087 containerd[1567]: 2025-10-30 13:03:23.185 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" Namespace="kube-system" Pod="coredns-66bc5c9577-djsc9" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--djsc9-eth0" Oct 30 13:03:23.214480 containerd[1567]: time="2025-10-30T13:03:23.214429254Z" level=info msg="connecting to shim 6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b" address="unix:///run/containerd/s/a5e23e20b877eea5c5e90ce0a0099be8c52badc6e3e01d02308f8535173b8351" namespace=k8s.io protocol=ttrpc version=3 Oct 30 13:03:23.241171 systemd[1]: Started 
cri-containerd-6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b.scope - libcontainer container 6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b. Oct 30 13:03:23.250160 kubelet[2714]: E1030 13:03:23.250123 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:23.257896 systemd-resolved[1281]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 13:03:23.273884 kubelet[2714]: I1030 13:03:23.273728 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t9zhq" podStartSLOduration=48.273710102 podStartE2EDuration="48.273710102s" podCreationTimestamp="2025-10-30 13:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:03:23.272482981 +0000 UTC m=+55.280419627" watchObservedRunningTime="2025-10-30 13:03:23.273710102 +0000 UTC m=+55.281646708" Oct 30 13:03:23.288780 containerd[1567]: time="2025-10-30T13:03:23.288734634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-djsc9,Uid:944adcd1-110a-4978-ab0e-71fcac0fc798,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b\"" Oct 30 13:03:23.290704 kubelet[2714]: E1030 13:03:23.290223 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:23.295546 containerd[1567]: time="2025-10-30T13:03:23.295461320Z" level=info msg="CreateContainer within sandbox \"6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 13:03:23.312429 containerd[1567]: time="2025-10-30T13:03:23.312387133Z" level=info msg="Container 29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc: CDI devices from CRI Config.CDIDevices: []" Oct 30 13:03:23.324329 containerd[1567]: time="2025-10-30T13:03:23.324224343Z" level=info msg="CreateContainer within sandbox \"6d830242bf1b72f45ad7c5246492339ff7836544051f32fec5ef05a66832323b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc\"" Oct 30 13:03:23.325350 containerd[1567]: time="2025-10-30T13:03:23.325327184Z" level=info msg="StartContainer for \"29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc\"" Oct 30 13:03:23.327305 containerd[1567]: time="2025-10-30T13:03:23.327224385Z" level=info msg="connecting to shim 29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc" address="unix:///run/containerd/s/a5e23e20b877eea5c5e90ce0a0099be8c52badc6e3e01d02308f8535173b8351" protocol=ttrpc version=3 Oct 30 13:03:23.351110 systemd[1]: Started cri-containerd-29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc.scope - libcontainer container 29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc. 
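The kubelet dns.go:154 "Nameserver limits exceeded" entries here, and repeated throughout the rest of this log, report that the host resolv.conf lists more nameservers than kubelet will hand to a pod, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied. The sketch below mirrors that trimming step under the assumption of a standard resolv.conf layout; the three-entry cap matches kubelet's per-pod nameserver limit, but trimNameservers and maxNameservers are illustrative names of my own, not kubelet code.

```go
// Illustrative only: mirrors the cap kubelet applies when building a pod's
// resolv.conf (at most three nameserver entries are kept).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's per-pod nameserver limit

// trimNameservers returns at most maxNameservers "nameserver" values from
// resolv.conf contents, in order of appearance.
func trimNameservers(sc *bufio.Scanner) []string {
	var servers []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Fprintf(os.Stderr, "nameserver limit exceeded: keeping first %d of %d\n",
			maxNameservers, len(servers))
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Println("applied nameserver line:", strings.Join(trimNameservers(bufio.NewScanner(f)), " "))
}
```

Against the resolv.conf implied by the log this would print the same applied line, 1.1.1.1 1.0.0.1 8.8.8.8; the warning is benign as long as the first three servers are the ones the cluster actually needs.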
Oct 30 13:03:23.378207 containerd[1567]: time="2025-10-30T13:03:23.378168906Z" level=info msg="StartContainer for \"29bbda167a8680ce609860d707a62300096542f2b02571dac43bd26e97aba0dc\" returns successfully" Oct 30 13:03:23.389614 systemd-networkd[1470]: cali40a7430b917: Gained IPv6LL Oct 30 13:03:23.580233 systemd-networkd[1470]: cali7bbfa61bd25: Gained IPv6LL Oct 30 13:03:24.246382 containerd[1567]: time="2025-10-30T13:03:24.246286392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:24.247661 containerd[1567]: time="2025-10-30T13:03:24.247622793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:03:24.247737 containerd[1567]: time="2025-10-30T13:03:24.247657433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:24.247960 kubelet[2714]: E1030 13:03:24.247894 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:24.248482 kubelet[2714]: E1030 13:03:24.247972 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:24.248482 kubelet[2714]: E1030 13:03:24.248142 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68ff7b8594-9nrw5_calico-apiserver(3de0a7ad-5e52-4e0f-a02c-b97bb2482d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:24.248482 kubelet[2714]: E1030 13:03:24.248190 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" podUID="3de0a7ad-5e52-4e0f-a02c-b97bb2482d40" Oct 30 13:03:24.248959 containerd[1567]: time="2025-10-30T13:03:24.248349433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:03:24.255215 kubelet[2714]: E1030 13:03:24.255165 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:24.255765 kubelet[2714]: E1030 13:03:24.255704 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:24.257128 kubelet[2714]: E1030 13:03:24.257062 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" podUID="3de0a7ad-5e52-4e0f-a02c-b97bb2482d40" Oct 30 13:03:24.278769 kubelet[2714]: I1030 13:03:24.277276 2714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-djsc9" podStartSLOduration=49.277258895 podStartE2EDuration="49.277258895s" podCreationTimestamp="2025-10-30 13:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 13:03:24.277229695 +0000 UTC m=+56.285166261" watchObservedRunningTime="2025-10-30 13:03:24.277258895 +0000 UTC m=+56.285195501" Oct 30 13:03:24.479272 containerd[1567]: time="2025-10-30T13:03:24.479162647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:24.480777 containerd[1567]: time="2025-10-30T13:03:24.480685168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:03:24.480777 containerd[1567]: time="2025-10-30T13:03:24.480725328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:03:24.481053 kubelet[2714]: E1030 13:03:24.480983 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:03:24.481053 kubelet[2714]: E1030 13:03:24.481047 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:03:24.481325 kubelet[2714]: E1030 13:03:24.481289 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Oct 30 13:03:24.481493 containerd[1567]: time="2025-10-30T13:03:24.481459529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:03:24.481699 kubelet[2714]: E1030 13:03:24.481471 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:24.796119 systemd-networkd[1470]: cali8117b22d57a: Gained IPv6LL Oct 30 13:03:25.261743 kubelet[2714]: E1030 13:03:25.261700 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:25.263054 kubelet[2714]: E1030 13:03:25.263022 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:25.264947 kubelet[2714]: E1030 13:03:25.264886 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:26.264014 kubelet[2714]: E1030 13:03:26.263909 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:26.550317 containerd[1567]: time="2025-10-30T13:03:26.550207631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:26.551170 containerd[1567]: time="2025-10-30T13:03:26.551133111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:03:26.551236 containerd[1567]: 
time="2025-10-30T13:03:26.551219271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:03:26.551461 kubelet[2714]: E1030 13:03:26.551422 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:03:26.551528 kubelet[2714]: E1030 13:03:26.551474 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:03:26.551595 kubelet[2714]: E1030 13:03:26.551574 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6475695c54-plmcq_calico-system(374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:26.551909 kubelet[2714]: E1030 13:03:26.551615 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" podUID="374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed" Oct 30 13:03:26.570991 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:57242.service - OpenSSH per-connection server daemon (10.0.0.1:57242). Oct 30 13:03:26.648142 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 57242 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:26.651411 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:26.655421 systemd-logind[1545]: New session 14 of user core. Oct 30 13:03:26.665096 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 13:03:26.775102 sshd[4972]: Connection closed by 10.0.0.1 port 57242 Oct 30 13:03:26.775413 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:26.779144 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Oct 30 13:03:26.779358 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:57242.service: Deactivated successfully. Oct 30 13:03:26.781072 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 13:03:26.783647 systemd-logind[1545]: Removed session 14. 
Oct 30 13:03:27.073110 containerd[1567]: time="2025-10-30T13:03:27.072919414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:03:27.266902 kubelet[2714]: E1030 13:03:27.266784 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" podUID="374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed" Oct 30 13:03:27.272824 containerd[1567]: time="2025-10-30T13:03:27.272788538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:27.273660 containerd[1567]: time="2025-10-30T13:03:27.273621539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:03:27.273819 containerd[1567]: time="2025-10-30T13:03:27.273684779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:03:27.274325 kubelet[2714]: E1030 13:03:27.273812 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:27.274325 kubelet[2714]: E1030 13:03:27.273874 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:27.274325 kubelet[2714]: E1030 13:03:27.273944 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:27.275699 containerd[1567]: time="2025-10-30T13:03:27.275668220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:03:27.957437 containerd[1567]: time="2025-10-30T13:03:27.957302963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:27.958992 containerd[1567]: time="2025-10-30T13:03:27.958940484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:03:27.959071 containerd[1567]: time="2025-10-30T13:03:27.958982084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:03:27.959220 kubelet[2714]: E1030 13:03:27.959170 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:03:27.959277 kubelet[2714]: E1030 13:03:27.959229 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:03:27.959340 kubelet[2714]: E1030 13:03:27.959306 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:27.959396 kubelet[2714]: E1030 13:03:27.959362 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8585bc7456-49r9k" podUID="718f0f53-2cc1-483f-b555-8046c49eaff8" Oct 30 13:03:31.787325 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:40910.service - OpenSSH per-connection server daemon (10.0.0.1:40910). Oct 30 13:03:31.853557 sshd[4991]: Accepted publickey for core from 10.0.0.1 port 40910 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:31.854615 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:31.859387 systemd-logind[1545]: New session 15 of user core. Oct 30 13:03:31.866118 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 13:03:31.968767 sshd[4994]: Connection closed by 10.0.0.1 port 40910 Oct 30 13:03:31.969097 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:31.974403 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:40910.service: Deactivated successfully. Oct 30 13:03:31.976328 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 13:03:31.977477 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. 
Oct 30 13:03:31.979024 systemd-logind[1545]: Removed session 15. Oct 30 13:03:32.074849 containerd[1567]: time="2025-10-30T13:03:32.073720022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:03:32.289935 containerd[1567]: time="2025-10-30T13:03:32.289892239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:32.294062 containerd[1567]: time="2025-10-30T13:03:32.294012001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 13:03:32.294161 containerd[1567]: time="2025-10-30T13:03:32.294087121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:32.294793 kubelet[2714]: E1030 13:03:32.294357 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:03:32.294793 kubelet[2714]: E1030 13:03:32.294475 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 13:03:32.294793 kubelet[2714]: E1030 13:03:32.294616 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vjdtn_calico-system(87d5470b-5c41-47d8-8a50-5ec2b3c997cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:32.294793 kubelet[2714]: E1030 13:03:32.294767 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc" Oct 30 13:03:35.072595 containerd[1567]: time="2025-10-30T13:03:35.072450423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:03:35.291713 containerd[1567]: time="2025-10-30T13:03:35.291670444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:35.292682 containerd[1567]: time="2025-10-30T13:03:35.292602604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:03:35.292682 containerd[1567]: 
time="2025-10-30T13:03:35.292654524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:35.292816 kubelet[2714]: E1030 13:03:35.292778 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:35.293179 kubelet[2714]: E1030 13:03:35.292821 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:35.293179 kubelet[2714]: E1030 13:03:35.292896 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68ff7b8594-9nrw5_calico-apiserver(3de0a7ad-5e52-4e0f-a02c-b97bb2482d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:35.293179 kubelet[2714]: E1030 13:03:35.292943 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" podUID="3de0a7ad-5e52-4e0f-a02c-b97bb2482d40" Oct 30 13:03:36.072327 containerd[1567]: time="2025-10-30T13:03:36.072217812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 13:03:36.276954 containerd[1567]: time="2025-10-30T13:03:36.276864642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:36.278230 containerd[1567]: time="2025-10-30T13:03:36.278126161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 13:03:36.278230 containerd[1567]: time="2025-10-30T13:03:36.278163121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 13:03:36.278393 kubelet[2714]: E1030 13:03:36.278357 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:36.278443 kubelet[2714]: E1030 13:03:36.278403 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 13:03:36.278517 kubelet[2714]: E1030 13:03:36.278493 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68ff7b8594-mg9vz_calico-apiserver(7109f21d-4639-4dbe-8df7-7e79b520c4a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:36.278561 kubelet[2714]: E1030 13:03:36.278532 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:36.980505 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:40920.service - OpenSSH per-connection server daemon (10.0.0.1:40920). Oct 30 13:03:37.035979 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 40920 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:37.037372 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:37.041403 systemd-logind[1545]: New session 16 of user core. Oct 30 13:03:37.053101 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 13:03:37.182815 sshd[5022]: Connection closed by 10.0.0.1 port 40920 Oct 30 13:03:37.183287 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:37.194182 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:40920.service: Deactivated successfully. Oct 30 13:03:37.196192 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 13:03:37.197126 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Oct 30 13:03:37.200703 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:40924.service - OpenSSH per-connection server daemon (10.0.0.1:40924). Oct 30 13:03:37.201676 systemd-logind[1545]: Removed session 16. Oct 30 13:03:37.260266 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 40924 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:37.261514 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:37.266402 systemd-logind[1545]: New session 17 of user core. Oct 30 13:03:37.276117 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 13:03:37.439457 sshd[5038]: Connection closed by 10.0.0.1 port 40924 Oct 30 13:03:37.439969 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:37.451414 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:40924.service: Deactivated successfully. Oct 30 13:03:37.454405 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 13:03:37.455130 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. 
Oct 30 13:03:37.457883 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:40934.service - OpenSSH per-connection server daemon (10.0.0.1:40934). Oct 30 13:03:37.458748 systemd-logind[1545]: Removed session 17. Oct 30 13:03:37.517824 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 40934 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:37.519070 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:37.523239 systemd-logind[1545]: New session 18 of user core. Oct 30 13:03:37.532096 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 30 13:03:38.074967 containerd[1567]: time="2025-10-30T13:03:38.074747425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 13:03:38.089486 sshd[5052]: Connection closed by 10.0.0.1 port 40934 Oct 30 13:03:38.090134 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:38.100982 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:40934.service: Deactivated successfully. Oct 30 13:03:38.104386 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 13:03:38.106230 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Oct 30 13:03:38.108876 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:40948.service - OpenSSH per-connection server daemon (10.0.0.1:40948). Oct 30 13:03:38.109731 systemd-logind[1545]: Removed session 18. Oct 30 13:03:38.177250 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 40948 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:38.178842 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:38.182664 systemd-logind[1545]: New session 19 of user core. Oct 30 13:03:38.197337 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 13:03:38.409130 sshd[5074]: Connection closed by 10.0.0.1 port 40948 Oct 30 13:03:38.410108 sshd-session[5070]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:38.419362 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:40948.service: Deactivated successfully. Oct 30 13:03:38.422838 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 13:03:38.425165 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Oct 30 13:03:38.427718 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:40962.service - OpenSSH per-connection server daemon (10.0.0.1:40962). Oct 30 13:03:38.428777 systemd-logind[1545]: Removed session 19. Oct 30 13:03:38.498915 sshd[5086]: Accepted publickey for core from 10.0.0.1 port 40962 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:38.500493 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:38.504768 systemd-logind[1545]: New session 20 of user core. Oct 30 13:03:38.512079 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 13:03:38.616047 sshd[5089]: Connection closed by 10.0.0.1 port 40962 Oct 30 13:03:38.615493 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:38.619521 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:40962.service: Deactivated successfully. Oct 30 13:03:38.621360 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 13:03:38.622020 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. Oct 30 13:03:38.622917 systemd-logind[1545]: Removed session 20. 
Oct 30 13:03:39.033813 containerd[1567]: time="2025-10-30T13:03:39.033717826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:39.034822 containerd[1567]: time="2025-10-30T13:03:39.034747666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 13:03:39.034822 containerd[1567]: time="2025-10-30T13:03:39.034793906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 13:03:39.035014 kubelet[2714]: E1030 13:03:39.034966 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:03:39.035433 kubelet[2714]: E1030 13:03:39.035022 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 13:03:39.035433 kubelet[2714]: E1030 13:03:39.035100 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:39.035856 containerd[1567]: time="2025-10-30T13:03:39.035834145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 13:03:39.263206 containerd[1567]: time="2025-10-30T13:03:39.263149253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:39.264188 containerd[1567]: time="2025-10-30T13:03:39.264150093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 13:03:39.264248 containerd[1567]: time="2025-10-30T13:03:39.264215333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 13:03:39.264412 kubelet[2714]: E1030 13:03:39.264375 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:03:39.264468 kubelet[2714]: E1030 13:03:39.264423 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 13:03:39.264545 kubelet[2714]: E1030 13:03:39.264522 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-22knm_calico-system(9b490fe4-c844-4da4-8382-9773dc4546ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:39.264915 kubelet[2714]: E1030 13:03:39.264887 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:40.075999 kubelet[2714]: E1030 13:03:40.075768 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8585bc7456-49r9k" podUID="718f0f53-2cc1-483f-b555-8046c49eaff8" Oct 30 13:03:40.082414 containerd[1567]: time="2025-10-30T13:03:40.082369843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 13:03:40.726254 containerd[1567]: time="2025-10-30T13:03:40.726111229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:40.727322 containerd[1567]: time="2025-10-30T13:03:40.727276669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 13:03:40.727322 
containerd[1567]: time="2025-10-30T13:03:40.727316469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 13:03:40.727670 kubelet[2714]: E1030 13:03:40.727572 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:03:40.727670 kubelet[2714]: E1030 13:03:40.727661 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 13:03:40.727830 kubelet[2714]: E1030 13:03:40.727747 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6475695c54-plmcq_calico-system(374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:40.727830 kubelet[2714]: E1030 13:03:40.727784 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" podUID="374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed" Oct 30 13:03:43.627085 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:34720.service - OpenSSH per-connection server daemon (10.0.0.1:34720). Oct 30 13:03:43.694912 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 34720 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:43.696769 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:43.700986 systemd-logind[1545]: New session 21 of user core. Oct 30 13:03:43.708054 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 13:03:43.795783 sshd[5110]: Connection closed by 10.0.0.1 port 34720 Oct 30 13:03:43.795296 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:43.799662 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Oct 30 13:03:43.799755 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:34720.service: Deactivated successfully. Oct 30 13:03:43.801486 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 13:03:43.803358 systemd-logind[1545]: Removed session 21. 
Oct 30 13:03:44.074028 kubelet[2714]: E1030 13:03:44.073588 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc" Oct 30 13:03:45.298399 containerd[1567]: time="2025-10-30T13:03:45.298349749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab7116561c25cbdb0c8173a7807c81d57876be7ee229cd7e871e5e61aea7d8cb\" id:\"4d22d01db3ce0a8df8357f22926150c945c06bc017a459b949b0f50c3b45fc90\" pid:5138 exited_at:{seconds:1761829425 nanos:297540749}" Oct 30 13:03:45.301706 kubelet[2714]: E1030 13:03:45.301645 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:48.073904 kubelet[2714]: E1030 13:03:48.073858 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-9nrw5" podUID="3de0a7ad-5e52-4e0f-a02c-b97bb2482d40" Oct 30 13:03:48.818176 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:34728.service - OpenSSH per-connection server daemon (10.0.0.1:34728). Oct 30 13:03:48.905571 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 34728 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:48.907425 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:48.911591 systemd-logind[1545]: New session 22 of user core. Oct 30 13:03:48.921078 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 13:03:49.047739 sshd[5155]: Connection closed by 10.0.0.1 port 34728 Oct 30 13:03:49.048080 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:49.052085 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:34728.service: Deactivated successfully. Oct 30 13:03:49.054624 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 13:03:49.055512 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Oct 30 13:03:49.056693 systemd-logind[1545]: Removed session 22. 
Oct 30 13:03:51.072547 containerd[1567]: time="2025-10-30T13:03:51.072511703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 13:03:51.322311 containerd[1567]: time="2025-10-30T13:03:51.322165110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:51.323428 containerd[1567]: time="2025-10-30T13:03:51.323178509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 13:03:51.323428 containerd[1567]: time="2025-10-30T13:03:51.323234229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 13:03:51.323514 kubelet[2714]: E1030 13:03:51.323332 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:51.323514 kubelet[2714]: E1030 13:03:51.323390 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 13:03:51.323514 kubelet[2714]: E1030 13:03:51.323494 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:51.324995 containerd[1567]: time="2025-10-30T13:03:51.324941589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 13:03:51.538631 containerd[1567]: time="2025-10-30T13:03:51.538553286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 13:03:51.539492 containerd[1567]: time="2025-10-30T13:03:51.539458526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 13:03:51.539589 containerd[1567]: time="2025-10-30T13:03:51.539479046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 13:03:51.539720 kubelet[2714]: E1030 13:03:51.539675 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 
13:03:51.539769 kubelet[2714]: E1030 13:03:51.539732 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 13:03:51.539840 kubelet[2714]: E1030 13:03:51.539820 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8585bc7456-49r9k_calico-system(718f0f53-2cc1-483f-b555-8046c49eaff8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 13:03:51.539893 kubelet[2714]: E1030 13:03:51.539864 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8585bc7456-49r9k" podUID="718f0f53-2cc1-483f-b555-8046c49eaff8" Oct 30 13:03:52.071893 kubelet[2714]: E1030 13:03:52.071810 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:52.072745 kubelet[2714]: E1030 13:03:52.072703 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68ff7b8594-mg9vz" podUID="7109f21d-4639-4dbe-8df7-7e79b520c4a3" Oct 30 13:03:53.072329 kubelet[2714]: E1030 13:03:53.072282 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6475695c54-plmcq" podUID="374229d4-b4e3-42c9-a2fb-6c8ea2dc49ed" Oct 30 13:03:53.073879 kubelet[2714]: E1030 13:03:53.073827 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-22knm" podUID="9b490fe4-c844-4da4-8382-9773dc4546ac" Oct 30 13:03:54.060568 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:42560.service - OpenSSH per-connection server daemon (10.0.0.1:42560). Oct 30 13:03:54.072367 kubelet[2714]: E1030 13:03:54.072008 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:54.121593 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 42560 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA Oct 30 13:03:54.123087 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 13:03:54.127064 systemd-logind[1545]: New session 23 of user core. Oct 30 13:03:54.142085 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 30 13:03:54.225963 sshd[5174]: Connection closed by 10.0.0.1 port 42560 Oct 30 13:03:54.225785 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Oct 30 13:03:54.229765 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:42560.service: Deactivated successfully. Oct 30 13:03:54.231570 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 13:03:54.232397 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Oct 30 13:03:54.233199 systemd-logind[1545]: Removed session 23. Oct 30 13:03:57.071663 kubelet[2714]: E1030 13:03:57.071618 2714 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 13:03:59.072432 containerd[1567]: time="2025-10-30T13:03:59.072350647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 13:03:59.252213 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862). 
Oct 30 13:03:59.325431 containerd[1567]: time="2025-10-30T13:03:59.325313386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 13:03:59.328058 containerd[1567]: time="2025-10-30T13:03:59.328023306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 30 13:03:59.328810 containerd[1567]: time="2025-10-30T13:03:59.328096306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 30 13:03:59.328869 kubelet[2714]: E1030 13:03:59.328267 2714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 13:03:59.328869 kubelet[2714]: E1030 13:03:59.328313 2714 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 13:03:59.328869 kubelet[2714]: E1030 13:03:59.328405 2714 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vjdtn_calico-system(87d5470b-5c41-47d8-8a50-5ec2b3c997cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 30 13:03:59.328869 kubelet[2714]: E1030 13:03:59.328435 2714 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vjdtn" podUID="87d5470b-5c41-47d8-8a50-5ec2b3c997cc"
Oct 30 13:03:59.332148 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:rXe27qMUmzSxngOipoYn2QbqTxguJSpLRRgoLbzr9FA
Oct 30 13:03:59.334689 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 13:03:59.339913 systemd-logind[1545]: New session 24 of user core.
Oct 30 13:03:59.348087 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 13:03:59.452702 sshd[5196]: Connection closed by 10.0.0.1 port 53862
Oct 30 13:03:59.455565 sshd-session[5193]: pam_unix(sshd:session): session closed for user core
Oct 30 13:03:59.460659 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:53862.service: Deactivated successfully.
Oct 30 13:03:59.462804 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 13:03:59.464672 systemd-logind[1545]: Session 24 logged out. Waiting for processes to exit.
Oct 30 13:03:59.466136 systemd-logind[1545]: Removed session 24.
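Between the ErrImagePull entries, the pod_workers lines switch to ImagePullBackOff: after each failed pull the kubelet waits progressively longer before retrying the same image, which is why the goldmane pull above only recurs several seconds after the whisker failures. The sketch below models that schedule with the commonly cited defaults of a 10-second initial delay doubling up to a 5-minute cap; those constants are assumptions about this kubelet build, not values present in the log.

```python
# Hedged sketch of an exponential back-off like the one behind the
# "Back-off pulling image" messages above. The 10 s initial delay and 300 s
# cap are commonly cited kubelet defaults, treated here as assumptions.
INITIAL_DELAY_S = 10
MAX_DELAY_S = 300

def backoff_schedule(failures: int):
    """Return the wait (in seconds) applied after each consecutive failure."""
    delay, schedule = INITIAL_DELAY_S, []
    for _ in range(failures):
        schedule.append(delay)
        delay = min(delay * 2, MAX_DELAY_S)
    return schedule

if __name__ == "__main__":
    # Eight straight failures: 10, 20, 40, 80, 160, 300, 300, 300 seconds.
    print(backoff_schedule(8))
```

However the retries are spaced, they cannot succeed until the v3.30.4 tags are actually published under ghcr.io/flatcar/calico or the pod specs are updated to reference a tag that exists.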