Oct 27 07:54:51.358390 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 27 07:54:51.358413 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Mon Oct 27 06:23:59 -00 2025
Oct 27 07:54:51.358421 kernel: KASLR enabled
Oct 27 07:54:51.358427 kernel: efi: EFI v2.7 by EDK II
Oct 27 07:54:51.358433 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Oct 27 07:54:51.358439 kernel: random: crng init done
Oct 27 07:54:51.358446 kernel: secureboot: Secure boot disabled
Oct 27 07:54:51.358452 kernel: ACPI: Early table checksum verification disabled
Oct 27 07:54:51.358460 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Oct 27 07:54:51.358466 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 27 07:54:51.358472 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358478 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358484 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358490 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358498 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358505 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358511 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358517 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358524 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 27 07:54:51.358530 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 27 07:54:51.358536 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 27 07:54:51.358543 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 07:54:51.358550 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Oct 27 07:54:51.358557 kernel: Zone ranges:
Oct 27 07:54:51.358563 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 07:54:51.358569 kernel: DMA32 empty
Oct 27 07:54:51.358575 kernel: Normal empty
Oct 27 07:54:51.358582 kernel: Device empty
Oct 27 07:54:51.358588 kernel: Movable zone start for each node
Oct 27 07:54:51.358594 kernel: Early memory node ranges
Oct 27 07:54:51.358600 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Oct 27 07:54:51.358607 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Oct 27 07:54:51.358613 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Oct 27 07:54:51.358619 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Oct 27 07:54:51.358627 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Oct 27 07:54:51.358633 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Oct 27 07:54:51.358640 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Oct 27 07:54:51.358646 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Oct 27 07:54:51.358652 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Oct 27 07:54:51.358658 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Oct 27 07:54:51.358675 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Oct 27 07:54:51.358683 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Oct 27 07:54:51.358690 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Oct 27 07:54:51.358696 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 27 07:54:51.358703 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 27 07:54:51.358710 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Oct 27 07:54:51.358717 kernel: psci: probing for conduit method from ACPI.
Oct 27 07:54:51.358723 kernel: psci: PSCIv1.1 detected in firmware.
Oct 27 07:54:51.358731 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 27 07:54:51.358738 kernel: psci: Trusted OS migration not required
Oct 27 07:54:51.358745 kernel: psci: SMC Calling Convention v1.1
Oct 27 07:54:51.358751 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 27 07:54:51.358758 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 27 07:54:51.358765 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 27 07:54:51.358772 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 27 07:54:51.358778 kernel: Detected PIPT I-cache on CPU0
Oct 27 07:54:51.358785 kernel: CPU features: detected: GIC system register CPU interface
Oct 27 07:54:51.358792 kernel: CPU features: detected: Spectre-v4
Oct 27 07:54:51.358799 kernel: CPU features: detected: Spectre-BHB
Oct 27 07:54:51.358807 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 27 07:54:51.358813 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 27 07:54:51.358820 kernel: CPU features: detected: ARM erratum 1418040
Oct 27 07:54:51.358827 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 27 07:54:51.358834 kernel: alternatives: applying boot alternatives
Oct 27 07:54:51.358841 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bee5c97bda7b98c2562b3493f0eda24483b61c5bb4f20dc75ba50cb0f724070a
Oct 27 07:54:51.358849 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 27 07:54:51.358856 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 27 07:54:51.358862 kernel: Fallback order for Node 0: 0
Oct 27 07:54:51.358869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Oct 27 07:54:51.358877 kernel: Policy zone: DMA
Oct 27 07:54:51.358884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 27 07:54:51.358891 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Oct 27 07:54:51.358897 kernel: software IO TLB: area num 4.
Oct 27 07:54:51.358904 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Oct 27 07:54:51.358911 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Oct 27 07:54:51.358918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 27 07:54:51.358924 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 27 07:54:51.358932 kernel: rcu: RCU event tracing is enabled.
Oct 27 07:54:51.358939 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 27 07:54:51.358946 kernel: Trampoline variant of Tasks RCU enabled.
Oct 27 07:54:51.358954 kernel: Tracing variant of Tasks RCU enabled.
Oct 27 07:54:51.358961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 27 07:54:51.358967 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 27 07:54:51.358974 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 07:54:51.358981 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 27 07:54:51.358988 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 27 07:54:51.358995 kernel: GICv3: 256 SPIs implemented
Oct 27 07:54:51.359001 kernel: GICv3: 0 Extended SPIs implemented
Oct 27 07:54:51.359008 kernel: Root IRQ handler: gic_handle_irq
Oct 27 07:54:51.359015 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 27 07:54:51.359021 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 27 07:54:51.359030 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 27 07:54:51.359036 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 27 07:54:51.359043 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Oct 27 07:54:51.359050 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Oct 27 07:54:51.359057 kernel: GICv3: using LPI property table @0x0000000040130000
Oct 27 07:54:51.359064 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Oct 27 07:54:51.359071 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 27 07:54:51.359077 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 07:54:51.359084 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 27 07:54:51.359091 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 27 07:54:51.359107 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 27 07:54:51.359116 kernel: arm-pv: using stolen time PV
Oct 27 07:54:51.359124 kernel: Console: colour dummy device 80x25
Oct 27 07:54:51.359131 kernel: ACPI: Core revision 20240827
Oct 27 07:54:51.359138 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 27 07:54:51.359146 kernel: pid_max: default: 32768 minimum: 301
Oct 27 07:54:51.359153 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 27 07:54:51.359160 kernel: landlock: Up and running.
Oct 27 07:54:51.359167 kernel: SELinux: Initializing.
Oct 27 07:54:51.359175 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 07:54:51.359182 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 27 07:54:51.359189 kernel: rcu: Hierarchical SRCU implementation.
Oct 27 07:54:51.359196 kernel: rcu: Max phase no-delay instances is 400.
Oct 27 07:54:51.359204 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 27 07:54:51.359211 kernel: Remapping and enabling EFI services.
Oct 27 07:54:51.359218 kernel: smp: Bringing up secondary CPUs ...
Oct 27 07:54:51.359226 kernel: Detected PIPT I-cache on CPU1
Oct 27 07:54:51.359238 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 27 07:54:51.359247 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Oct 27 07:54:51.359254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 07:54:51.359262 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 27 07:54:51.359269 kernel: Detected PIPT I-cache on CPU2
Oct 27 07:54:51.359277 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 27 07:54:51.359285 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Oct 27 07:54:51.359293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 07:54:51.359300 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 27 07:54:51.359307 kernel: Detected PIPT I-cache on CPU3
Oct 27 07:54:51.359315 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 27 07:54:51.359323 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Oct 27 07:54:51.359331 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 27 07:54:51.359360 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 27 07:54:51.359369 kernel: smp: Brought up 1 node, 4 CPUs
Oct 27 07:54:51.359376 kernel: SMP: Total of 4 processors activated.
Oct 27 07:54:51.359384 kernel: CPU: All CPU(s) started at EL1
Oct 27 07:54:51.359392 kernel: CPU features: detected: 32-bit EL0 Support
Oct 27 07:54:51.359400 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 27 07:54:51.359408 kernel: CPU features: detected: Common not Private translations
Oct 27 07:54:51.359417 kernel: CPU features: detected: CRC32 instructions
Oct 27 07:54:51.359425 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 27 07:54:51.359432 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 27 07:54:51.359440 kernel: CPU features: detected: LSE atomic instructions
Oct 27 07:54:51.359447 kernel: CPU features: detected: Privileged Access Never
Oct 27 07:54:51.359455 kernel: CPU features: detected: RAS Extension Support
Oct 27 07:54:51.359462 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 27 07:54:51.359470 kernel: alternatives: applying system-wide alternatives
Oct 27 07:54:51.359479 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Oct 27 07:54:51.359487 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved)
Oct 27 07:54:51.359495 kernel: devtmpfs: initialized
Oct 27 07:54:51.359503 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 27 07:54:51.359510 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 27 07:54:51.359518 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 27 07:54:51.359526 kernel: 0 pages in range for non-PLT usage
Oct 27 07:54:51.359534 kernel: 515056 pages in range for PLT usage
Oct 27 07:54:51.359543 kernel: pinctrl core: initialized pinctrl subsystem
Oct 27 07:54:51.359550 kernel: SMBIOS 3.0.0 present.
Oct 27 07:54:51.359557 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Oct 27 07:54:51.359565 kernel: DMI: Memory slots populated: 1/1
Oct 27 07:54:51.359573 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 27 07:54:51.359580 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 27 07:54:51.359589 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 27 07:54:51.359598 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 27 07:54:51.359605 kernel: audit: initializing netlink subsys (disabled)
Oct 27 07:54:51.359613 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Oct 27 07:54:51.359620 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 27 07:54:51.359628 kernel: cpuidle: using governor menu
Oct 27 07:54:51.359636 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 27 07:54:51.359644 kernel: ASID allocator initialised with 32768 entries
Oct 27 07:54:51.359652 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 27 07:54:51.359659 kernel: Serial: AMBA PL011 UART driver
Oct 27 07:54:51.359671 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 27 07:54:51.359680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 27 07:54:51.359687 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 27 07:54:51.359695 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 27 07:54:51.359703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 27 07:54:51.359712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 27 07:54:51.359720 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 27 07:54:51.359727 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 27 07:54:51.359735 kernel: ACPI: Added _OSI(Module Device)
Oct 27 07:54:51.359743 kernel: ACPI: Added _OSI(Processor Device)
Oct 27 07:54:51.359751 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 27 07:54:51.359758 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 27 07:54:51.359767 kernel: ACPI: Interpreter enabled
Oct 27 07:54:51.359775 kernel: ACPI: Using GIC for interrupt routing
Oct 27 07:54:51.359782 kernel: ACPI: MCFG table detected, 1 entries
Oct 27 07:54:51.359790 kernel: ACPI: CPU0 has been hot-added
Oct 27 07:54:51.359797 kernel: ACPI: CPU1 has been hot-added
Oct 27 07:54:51.359804 kernel: ACPI: CPU2 has been hot-added
Oct 27 07:54:51.359812 kernel: ACPI: CPU3 has been hot-added
Oct 27 07:54:51.359830 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 27 07:54:51.359839 kernel: printk: legacy console [ttyAMA0] enabled
Oct 27 07:54:51.359847 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 27 07:54:51.360001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 27 07:54:51.360088 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 27 07:54:51.360168 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 27 07:54:51.360350 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 27 07:54:51.360459 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 27 07:54:51.360470 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 27 07:54:51.360478 kernel: PCI host bridge to bus 0000:00
Oct 27 07:54:51.360566 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 27 07:54:51.360638 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 27 07:54:51.360728 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 27 07:54:51.360801 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 27 07:54:51.360906 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Oct 27 07:54:51.361018 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 27 07:54:51.361116 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Oct 27 07:54:51.361197 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Oct 27 07:54:51.361280 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 27 07:54:51.361370 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Oct 27 07:54:51.361453 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Oct 27 07:54:51.361533 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Oct 27 07:54:51.361606 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 27 07:54:51.361683 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 27 07:54:51.361761 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 27 07:54:51.361770 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 27 07:54:51.361778 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 27 07:54:51.361788 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 27 07:54:51.361800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 27 07:54:51.361808 kernel: iommu: Default domain type: Translated
Oct 27 07:54:51.361818 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 27 07:54:51.361825 kernel: efivars: Registered efivars operations
Oct 27 07:54:51.361833 kernel: vgaarb: loaded
Oct 27 07:54:51.361840 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 27 07:54:51.361848 kernel: VFS: Disk quotas dquot_6.6.0
Oct 27 07:54:51.361855 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 27 07:54:51.361863 kernel: pnp: PnP ACPI init
Oct 27 07:54:51.361951 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 27 07:54:51.361962 kernel: pnp: PnP ACPI: found 1 devices
Oct 27 07:54:51.361969 kernel: NET: Registered PF_INET protocol family
Oct 27 07:54:51.361977 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 27 07:54:51.361985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 27 07:54:51.361992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 27 07:54:51.362000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 27 07:54:51.362009 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 27 07:54:51.362017 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 27 07:54:51.362024 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 07:54:51.362032 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 27 07:54:51.362039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 27 07:54:51.362046 kernel: PCI: CLS 0 bytes, default 64
Oct 27 07:54:51.362054 kernel: kvm [1]: HYP mode not available
Oct 27 07:54:51.362063 kernel: Initialise system trusted keyrings
Oct 27 07:54:51.362071 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 27 07:54:51.362079 kernel: Key type asymmetric registered
Oct 27 07:54:51.362086 kernel: Asymmetric key parser 'x509' registered
Oct 27 07:54:51.362093 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 27 07:54:51.362101 kernel: io scheduler mq-deadline registered
Oct 27 07:54:51.362109 kernel: io scheduler kyber registered
Oct 27 07:54:51.362118 kernel: io scheduler bfq registered
Oct 27 07:54:51.362125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 27 07:54:51.362133 kernel: ACPI: button: Power Button [PWRB]
Oct 27 07:54:51.362141 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 27 07:54:51.362218 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 27 07:54:51.362228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 27 07:54:51.362236 kernel: thunder_xcv, ver 1.0
Oct 27 07:54:51.362245 kernel: thunder_bgx, ver 1.0
Oct 27 07:54:51.362252 kernel: nicpf, ver 1.0
Oct 27 07:54:51.362260 kernel: nicvf, ver 1.0
Oct 27 07:54:51.362360 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 27 07:54:51.362440 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-27T07:54:50 UTC (1761551690)
Oct 27 07:54:51.362450 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 27 07:54:51.362458 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Oct 27 07:54:51.362468 kernel: watchdog: NMI not fully supported
Oct 27 07:54:51.362476 kernel: watchdog: Hard watchdog permanently disabled
Oct 27 07:54:51.362483 kernel: NET: Registered PF_INET6 protocol family
Oct 27 07:54:51.362491 kernel: Segment Routing with IPv6
Oct 27 07:54:51.362499 kernel: In-situ OAM (IOAM) with IPv6
Oct 27 07:54:51.362506 kernel: NET: Registered PF_PACKET protocol family
Oct 27 07:54:51.362514 kernel: Key type dns_resolver registered
Oct 27 07:54:51.362523 kernel: registered taskstats version 1
Oct 27 07:54:51.362530 kernel: Loading compiled-in X.509 certificates
Oct 27 07:54:51.362538 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4752e244308ff0a2d82919d15b3eeaa26e2bfb4e'
Oct 27 07:54:51.362546 kernel: Demotion targets for Node 0: null
Oct 27 07:54:51.362553 kernel: Key type .fscrypt registered
Oct 27 07:54:51.362561 kernel: Key type fscrypt-provisioning registered
Oct 27 07:54:51.362568 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 27 07:54:51.362577 kernel: ima: Allocated hash algorithm: sha1
Oct 27 07:54:51.362584 kernel: ima: No architecture policies found
Oct 27 07:54:51.362592 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 27 07:54:51.362600 kernel: clk: Disabling unused clocks
Oct 27 07:54:51.362607 kernel: PM: genpd: Disabling unused power domains
Oct 27 07:54:51.362615 kernel: Freeing unused kernel memory: 12992K
Oct 27 07:54:51.362622 kernel: Run /init as init process
Oct 27 07:54:51.362631 kernel: with arguments:
Oct 27 07:54:51.362638 kernel: /init
Oct 27 07:54:51.362646 kernel: with environment:
Oct 27 07:54:51.362653 kernel: HOME=/
Oct 27 07:54:51.362660 kernel: TERM=linux
Oct 27 07:54:51.362764 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Oct 27 07:54:51.362845 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 27 07:54:51.362857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 27 07:54:51.362865 kernel: GPT:16515071 != 27000831
Oct 27 07:54:51.362872 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 27 07:54:51.362879 kernel: GPT:16515071 != 27000831
Oct 27 07:54:51.362887 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 27 07:54:51.362894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 27 07:54:51.362903 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362911 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362918 kernel: SCSI subsystem initialized
Oct 27 07:54:51.362925 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 27 07:54:51.362940 kernel: device-mapper: uevent: version 1.0.3
Oct 27 07:54:51.362948 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 27 07:54:51.362957 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 27 07:54:51.362965 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362972 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362979 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.362987 kernel: raid6: neonx8 gen() 15750 MB/s
Oct 27 07:54:51.362994 kernel: raid6: neonx4 gen() 15764 MB/s
Oct 27 07:54:51.363002 kernel: raid6: neonx2 gen() 13212 MB/s
Oct 27 07:54:51.363009 kernel: raid6: neonx1 gen() 10435 MB/s
Oct 27 07:54:51.363018 kernel: raid6: int64x8 gen() 6855 MB/s
Oct 27 07:54:51.363025 kernel: raid6: int64x4 gen() 7330 MB/s
Oct 27 07:54:51.363032 kernel: raid6: int64x2 gen() 6102 MB/s
Oct 27 07:54:51.363040 kernel: raid6: int64x1 gen() 5055 MB/s
Oct 27 07:54:51.363047 kernel: raid6: using algorithm neonx4 gen() 15764 MB/s
Oct 27 07:54:51.363055 kernel: raid6: .... xor() 12257 MB/s, rmw enabled
Oct 27 07:54:51.363063 kernel: raid6: using neon recovery algorithm
Oct 27 07:54:51.363072 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363079 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363086 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363094 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363101 kernel: xor: measuring software checksum speed
Oct 27 07:54:51.363109 kernel: 8regs : 21647 MB/sec
Oct 27 07:54:51.363116 kernel: 32regs : 21658 MB/sec
Oct 27 07:54:51.363124 kernel: arm64_neon : 27946 MB/sec
Oct 27 07:54:51.363132 kernel: xor: using function: arm64_neon (27946 MB/sec)
Oct 27 07:54:51.363140 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363147 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 27 07:54:51.363155 kernel: BTRFS: device fsid 9afaa1bd-7ba4-4e53-8ec5-a87987c89a6c devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (204)
Oct 27 07:54:51.363163 kernel: BTRFS info (device dm-0): first mount of filesystem 9afaa1bd-7ba4-4e53-8ec5-a87987c89a6c
Oct 27 07:54:51.363170 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 27 07:54:51.363178 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 27 07:54:51.363185 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 27 07:54:51.363194 kernel: Invalid ELF header magic: != \u007fELF
Oct 27 07:54:51.363202 kernel: loop: module loaded
Oct 27 07:54:51.363209 kernel: loop0: detected capacity change from 0 to 91464
Oct 27 07:54:51.363217 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 27 07:54:51.363225 systemd[1]: Successfully made /usr/ read-only.
Oct 27 07:54:51.363236 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 27 07:54:51.363246 systemd[1]: Detected virtualization kvm.
Oct 27 07:54:51.363253 systemd[1]: Detected architecture arm64.
Oct 27 07:54:51.363261 systemd[1]: Running in initrd.
Oct 27 07:54:51.363269 systemd[1]: No hostname configured, using default hostname.
Oct 27 07:54:51.363277 systemd[1]: Hostname set to .
Oct 27 07:54:51.363285 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 27 07:54:51.363294 systemd[1]: Queued start job for default target initrd.target.
Oct 27 07:54:51.363302 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 07:54:51.363310 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 27 07:54:51.363319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 27 07:54:51.363327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 27 07:54:51.363345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 27 07:54:51.363370 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 27 07:54:51.363383 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 27 07:54:51.363393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 27 07:54:51.363401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 27 07:54:51.363410 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 27 07:54:51.363419 systemd[1]: Reached target paths.target - Path Units.
Oct 27 07:54:51.363428 systemd[1]: Reached target slices.target - Slice Units.
Oct 27 07:54:51.363436 systemd[1]: Reached target swap.target - Swaps.
Oct 27 07:54:51.363444 systemd[1]: Reached target timers.target - Timer Units.
Oct 27 07:54:51.363453 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 27 07:54:51.363461 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 27 07:54:51.363469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 27 07:54:51.363478 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 27 07:54:51.363487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 27 07:54:51.363495 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 27 07:54:51.363504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 27 07:54:51.363512 systemd[1]: Reached target sockets.target - Socket Units.
Oct 27 07:54:51.363520 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 27 07:54:51.363530 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 27 07:54:51.363539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 27 07:54:51.363547 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 27 07:54:51.363556 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 27 07:54:51.363564 systemd[1]: Starting systemd-fsck-usr.service...
Oct 27 07:54:51.363572 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 27 07:54:51.363582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 27 07:54:51.363591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 07:54:51.363600 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 27 07:54:51.363609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 27 07:54:51.363617 systemd[1]: Finished systemd-fsck-usr.service.
Oct 27 07:54:51.363627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 27 07:54:51.363654 systemd-journald[343]: Collecting audit messages is disabled.
Oct 27 07:54:51.363681 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 27 07:54:51.363691 kernel: Bridge firewalling registered
Oct 27 07:54:51.363700 systemd-journald[343]: Journal started
Oct 27 07:54:51.363719 systemd-journald[343]: Runtime Journal (/run/log/journal/0c271e022ff043d487b5cf5cef1be27d) is 6M, max 48.5M, 42.4M free.
Oct 27 07:54:51.361745 systemd-modules-load[344]: Inserted module 'br_netfilter'
Oct 27 07:54:51.369110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 27 07:54:51.372570 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 27 07:54:51.373149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 07:54:51.375366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 27 07:54:51.379414 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 27 07:54:51.381278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 27 07:54:51.383923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 27 07:54:51.395915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 27 07:54:51.404139 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 27 07:54:51.406424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 27 07:54:51.408412 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 27 07:54:51.411405 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 27 07:54:51.414304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 27 07:54:51.415608 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 27 07:54:51.418653 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 27 07:54:51.437050 dracut-cmdline[387]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=bee5c97bda7b98c2562b3493f0eda24483b61c5bb4f20dc75ba50cb0f724070a
Oct 27 07:54:51.460098 systemd-resolved[386]: Positive Trust Anchors:
Oct 27 07:54:51.460125 systemd-resolved[386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 27 07:54:51.460129 systemd-resolved[386]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 27 07:54:51.460159 systemd-resolved[386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 27 07:54:51.484035 systemd-resolved[386]: Defaulting to hostname 'linux'.
Oct 27 07:54:51.485107 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 27 07:54:51.486458 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 27 07:54:51.518359 kernel: Loading iSCSI transport class v2.0-870.
Oct 27 07:54:51.526362 kernel: iscsi: registered transport (tcp)
Oct 27 07:54:51.539706 kernel: iscsi: registered transport (qla4xxx)
Oct 27 07:54:51.539749 kernel: QLogic iSCSI HBA Driver
Oct 27 07:54:51.559802 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 27 07:54:51.580644 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 27 07:54:51.582297 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 27 07:54:51.628572 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 27 07:54:51.630953 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 27 07:54:51.632654 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 27 07:54:51.666800 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 27 07:54:51.669799 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 27 07:54:51.698187 systemd-udevd[630]: Using default interface naming scheme 'v257'.
Oct 27 07:54:51.705967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 27 07:54:51.709460 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 27 07:54:51.736086 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation
Oct 27 07:54:51.736170 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 27 07:54:51.739229 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 27 07:54:51.761867 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 27 07:54:51.765856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 27 07:54:51.788565 systemd-networkd[744]: lo: Link UP
Oct 27 07:54:51.788572 systemd-networkd[744]: lo: Gained carrier
Oct 27 07:54:51.789479 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 27 07:54:51.790663 systemd[1]: Reached target network.target - Network.
Oct 27 07:54:51.823203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 27 07:54:51.826311 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 27 07:54:51.861238 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 27 07:54:51.870156 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 27 07:54:51.878772 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 27 07:54:51.893114 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 27 07:54:51.902218 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 27 07:54:51.904939 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 27 07:54:51.906267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 07:54:51.907562 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 27 07:54:51.910473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 27 07:54:51.913426 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 27 07:54:51.916927 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 07:54:51.916943 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 27 07:54:51.918259 systemd-networkd[744]: eth0: Link UP
Oct 27 07:54:51.918852 systemd-networkd[744]: eth0: Gained carrier
Oct 27 07:54:51.918862 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 27 07:54:51.919697 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 27 07:54:51.919849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 07:54:51.921629 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 07:54:51.932118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 27 07:54:51.934873 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 27 07:54:51.941482 disk-uuid[811]: Primary Header is updated.
Oct 27 07:54:51.941482 disk-uuid[811]: Secondary Entries is updated.
Oct 27 07:54:51.941482 disk-uuid[811]: Secondary Header is updated.
Oct 27 07:54:51.942459 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 27 07:54:51.966484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 27 07:54:52.973281 disk-uuid[818]: Warning: The kernel is still using the old partition table.
Oct 27 07:54:52.973281 disk-uuid[818]: The new table will be used at the next reboot or after you
Oct 27 07:54:52.973281 disk-uuid[818]: run partprobe(8) or kpartx(8)
Oct 27 07:54:52.973281 disk-uuid[818]: The operation has completed successfully.
Oct 27 07:54:52.978640 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 27 07:54:52.978764 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 27 07:54:52.981043 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 27 07:54:53.009365 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833)
Oct 27 07:54:53.012017 kernel: BTRFS info (device vda6): first mount of filesystem 982f77bd-959a-4e7c-ad27-072c75539c37
Oct 27 07:54:53.012065 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 27 07:54:53.014719 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 07:54:53.014751 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 07:54:53.021724 kernel: BTRFS info (device vda6): last unmount of filesystem 982f77bd-959a-4e7c-ad27-072c75539c37
Oct 27 07:54:53.022988 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 27 07:54:53.026124 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 27 07:54:53.134515 ignition[852]: Ignition 2.22.0
Oct 27 07:54:53.134537 ignition[852]: Stage: fetch-offline
Oct 27 07:54:53.134571 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Oct 27 07:54:53.134580 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 07:54:53.134668 ignition[852]: parsed url from cmdline: ""
Oct 27 07:54:53.134671 ignition[852]: no config URL provided
Oct 27 07:54:53.134676 ignition[852]: reading system config file "/usr/lib/ignition/user.ign"
Oct 27 07:54:53.134686 ignition[852]: no config at "/usr/lib/ignition/user.ign"
Oct 27 07:54:53.134722 ignition[852]: op(1): [started] loading QEMU firmware config module
Oct 27 07:54:53.134726 ignition[852]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 27 07:54:53.140650 ignition[852]: op(1): [finished] loading QEMU firmware config module
Oct 27 07:54:53.184423 ignition[852]: parsing config with SHA512: d30f267c1be12fb2723e483e118840be157768d9d64a686b52197dfb039806692710ac520e1b69796d6593327e5b287136369050def9ab9946f063ee82253998
Oct 27 07:54:53.190447 unknown[852]: fetched base config from "system"
Oct 27 07:54:53.190459 unknown[852]: fetched user config from "qemu"
Oct 27 07:54:53.191103 ignition[852]: fetch-offline: fetch-offline passed
Oct 27 07:54:53.191205 ignition[852]: Ignition finished successfully
Oct 27 07:54:53.197392 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 27 07:54:53.198850 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 27 07:54:53.199748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 27 07:54:53.241509 ignition[863]: Ignition 2.22.0
Oct 27 07:54:53.241527 ignition[863]: Stage: kargs
Oct 27 07:54:53.241646 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Oct 27 07:54:53.241664 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 07:54:53.244438 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 27 07:54:53.242383 ignition[863]: kargs: kargs passed
Oct 27 07:54:53.242420 ignition[863]: Ignition finished successfully
Oct 27 07:54:53.251480 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 27 07:54:53.273491 ignition[870]: Ignition 2.22.0
Oct 27 07:54:53.273508 ignition[870]: Stage: disks
Oct 27 07:54:53.273636 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Oct 27 07:54:53.273643 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 07:54:53.274392 ignition[870]: disks: disks passed
Oct 27 07:54:53.276675 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 27 07:54:53.274433 ignition[870]: Ignition finished successfully
Oct 27 07:54:53.278059 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 27 07:54:53.279493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 27 07:54:53.284749 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 27 07:54:53.286304 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 27 07:54:53.288450 systemd[1]: Reached target basic.target - Basic System.
Oct 27 07:54:53.297533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 27 07:54:53.323756 systemd-fsck[880]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 27 07:54:53.328664 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 27 07:54:53.330767 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 27 07:54:53.394355 kernel: EXT4-fs (vda9): mounted filesystem d768f01c-c0e5-461b-b58d-865d6e0e2a61 r/w with ordered data mode. Quota mode: none.
Oct 27 07:54:53.394758 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 27 07:54:53.396043 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 27 07:54:53.398587 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 07:54:53.401840 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 27 07:54:53.402896 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 27 07:54:53.402925 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 27 07:54:53.402948 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 07:54:53.419785 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 27 07:54:53.421820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 27 07:54:53.432361 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Oct 27 07:54:53.435496 kernel: BTRFS info (device vda6): first mount of filesystem 982f77bd-959a-4e7c-ad27-072c75539c37
Oct 27 07:54:53.435523 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 27 07:54:53.439205 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 07:54:53.439224 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 07:54:53.440906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 07:54:53.464421 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Oct 27 07:54:53.468514 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Oct 27 07:54:53.472788 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Oct 27 07:54:53.475975 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 27 07:54:53.545594 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 27 07:54:53.548471 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 27 07:54:53.551218 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 27 07:54:53.571695 kernel: BTRFS info (device vda6): last unmount of filesystem 982f77bd-959a-4e7c-ad27-072c75539c37
Oct 27 07:54:53.572262 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 27 07:54:53.586481 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 27 07:54:53.603603 ignition[1002]: INFO : Ignition 2.22.0
Oct 27 07:54:53.603603 ignition[1002]: INFO : Stage: mount
Oct 27 07:54:53.605113 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 07:54:53.605113 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 07:54:53.605113 ignition[1002]: INFO : mount: mount passed
Oct 27 07:54:53.605113 ignition[1002]: INFO : Ignition finished successfully
Oct 27 07:54:53.606395 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 27 07:54:53.613946 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 27 07:54:53.971494 systemd-networkd[744]: eth0: Gained IPv6LL
Oct 27 07:54:54.396399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 27 07:54:54.415546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Oct 27 07:54:54.415580 kernel: BTRFS info (device vda6): first mount of filesystem 982f77bd-959a-4e7c-ad27-072c75539c37
Oct 27 07:54:54.415592 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 27 07:54:54.419011 kernel: BTRFS info (device vda6): turning on async discard
Oct 27 07:54:54.419040 kernel: BTRFS info (device vda6): enabling free space tree
Oct 27 07:54:54.420364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 27 07:54:54.452442 ignition[1031]: INFO : Ignition 2.22.0
Oct 27 07:54:54.452442 ignition[1031]: INFO : Stage: files
Oct 27 07:54:54.454122 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 27 07:54:54.454122 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 27 07:54:54.454122 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Oct 27 07:54:54.457657 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 27 07:54:54.457657 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 27 07:54:54.461304 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 27 07:54:54.462830 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 27 07:54:54.462830 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 27 07:54:54.461803 unknown[1031]: wrote ssh authorized keys file for user: core
Oct 27 07:54:54.466847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 27 07:54:54.466847 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Oct 27 07:54:54.632128 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 27 07:54:54.754126 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Oct 27 07:54:54.754126 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 07:54:54.758658 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 27 07:54:54.773454 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Oct 27 07:54:55.044764 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 27 07:54:55.224273 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Oct 27 07:54:55.224273 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 27 07:54:55.228435 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 27 07:54:55.243427 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 07:54:55.246158 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 27 07:54:55.248407 ignition[1031]: INFO : files: files passed
Oct 27 07:54:55.248407 ignition[1031]: INFO : Ignition finished successfully
Oct 27 07:54:55.250421 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 27 07:54:55.254178 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 27 07:54:55.257005 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 27 07:54:55.268589 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 27 07:54:55.268725 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 27 07:54:55.273976 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 27 07:54:55.276212 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 07:54:55.276212 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 07:54:55.279468 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 27 07:54:55.280492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 27 07:54:55.282410 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 27 07:54:55.285261 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 27 07:54:55.351465 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 27 07:54:55.351595 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 27 07:54:55.353914 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 27 07:54:55.355787 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 27 07:54:55.357827 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 27 07:54:55.358676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 27 07:54:55.393325 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 07:54:55.395889 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 27 07:54:55.420976 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 27 07:54:55.421172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 27 07:54:55.423534 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 27 07:54:55.425608 systemd[1]: Stopped target timers.target - Timer Units.
Oct 27 07:54:55.427348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 27 07:54:55.427468 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 27 07:54:55.430164 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 27 07:54:55.432316 systemd[1]: Stopped target basic.target - Basic System.
Oct 27 07:54:55.434055 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 27 07:54:55.435809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 27 07:54:55.438498 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 07:54:55.440697 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 07:54:55.442702 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 07:54:55.444528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 07:54:55.446548 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 07:54:55.448522 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 07:54:55.450286 systemd[1]: Stopped target swap.target - Swaps. Oct 27 07:54:55.451906 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 07:54:55.452026 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 07:54:55.454525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 07:54:55.455662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 07:54:55.457546 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 07:54:55.458398 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 07:54:55.459632 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 07:54:55.459750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 07:54:55.462359 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 07:54:55.462481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 07:54:55.464781 systemd[1]: Stopped target paths.target - Path Units. Oct 27 07:54:55.466266 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 07:54:55.467146 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 07:54:55.468538 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 07:54:55.470303 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 07:54:55.472428 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 07:54:55.472506 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 07:54:55.474771 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 07:54:55.474847 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 07:54:55.476775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 07:54:55.476887 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 07:54:55.479511 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 07:54:55.479623 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 07:54:55.482043 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 07:54:55.484366 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 07:54:55.485222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 07:54:55.485363 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 07:54:55.487322 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 07:54:55.487437 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 07:54:55.489430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 07:54:55.489538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 27 07:54:55.495591 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 07:54:55.500501 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 07:54:55.511059 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 07:54:55.519958 ignition[1090]: INFO : Ignition 2.22.0 Oct 27 07:54:55.519958 ignition[1090]: INFO : Stage: umount Oct 27 07:54:55.521776 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 07:54:55.521776 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 07:54:55.521776 ignition[1090]: INFO : umount: umount passed Oct 27 07:54:55.521776 ignition[1090]: INFO : Ignition finished successfully Oct 27 07:54:55.523449 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 07:54:55.523538 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 07:54:55.525672 systemd[1]: Stopped target network.target - Network. Oct 27 07:54:55.526759 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 07:54:55.526827 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 07:54:55.530551 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 07:54:55.530612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 07:54:55.532214 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 07:54:55.532268 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 07:54:55.534090 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 07:54:55.534136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 07:54:55.536137 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 07:54:55.539943 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 07:54:55.549420 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 07:54:55.549530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 07:54:55.553698 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 07:54:55.553786 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 07:54:55.558011 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 07:54:55.559089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 07:54:55.562414 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 27 07:54:55.564597 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 07:54:55.564668 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 07:54:55.566796 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 07:54:55.566857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 07:54:55.569686 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 07:54:55.570693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 07:54:55.570765 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 07:54:55.573109 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 07:54:55.573161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 07:54:55.575052 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 07:54:55.575099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 27 07:54:55.577136 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 07:54:55.591667 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 07:54:55.591832 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 07:54:55.596953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 07:54:55.597035 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 07:54:55.599245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 07:54:55.599279 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 07:54:55.601261 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 07:54:55.601312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 07:54:55.604246 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 07:54:55.604301 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 07:54:55.607174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 07:54:55.607224 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 07:54:55.619058 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 07:54:55.620369 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 07:54:55.620441 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 07:54:55.623440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 07:54:55.623492 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 07:54:55.625234 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 07:54:55.625283 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 07:54:55.627680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 07:54:55.627722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 07:54:55.629704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 07:54:55.629753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 07:54:55.632503 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 07:54:55.632616 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 07:54:55.634019 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 07:54:55.634089 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 07:54:55.636966 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 07:54:55.639121 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 07:54:55.656924 systemd[1]: Switching root. Oct 27 07:54:55.679387 systemd-journald[343]: Journal stopped Oct 27 07:54:56.421140 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). 
Oct 27 07:54:56.421186 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 07:54:56.421203 kernel: SELinux: policy capability open_perms=1 Oct 27 07:54:56.421216 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 07:54:56.421227 kernel: SELinux: policy capability always_check_network=0 Oct 27 07:54:56.421238 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 07:54:56.421250 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 07:54:56.421260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 07:54:56.421270 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 07:54:56.421279 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 07:54:56.421290 kernel: audit: type=1403 audit(1761551695.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 07:54:56.421300 systemd[1]: Successfully loaded SELinux policy in 60.910ms. Oct 27 07:54:56.421313 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.277ms. Oct 27 07:54:56.421326 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 07:54:56.421411 systemd[1]: Detected virtualization kvm. Oct 27 07:54:56.421424 systemd[1]: Detected architecture arm64. Oct 27 07:54:56.421435 systemd[1]: Detected first boot. Oct 27 07:54:56.421446 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 07:54:56.421460 zram_generator::config[1139]: No configuration found. Oct 27 07:54:56.421471 kernel: NET: Registered PF_VSOCK protocol family Oct 27 07:54:56.421483 systemd[1]: Populated /etc with preset unit settings. Oct 27 07:54:56.421496 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 07:54:56.421506 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 07:54:56.421517 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 07:54:56.421528 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 07:54:56.421539 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 07:54:56.421549 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 07:54:56.421561 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 07:54:56.421571 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 07:54:56.421582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 07:54:56.421593 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 07:54:56.421603 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 07:54:56.421613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 07:54:56.421624 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 07:54:56.421644 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 07:54:56.421656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
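[Editor's note] The kernel lines above list the loaded SELinux policy capabilities, and systemd reports a first boot with the machine ID initialized from SMBIOS/DMI. A small sketch, assuming a Linux host where /sys/fs/selinux and /etc/machine-id exist, of how one could confirm that state after this point in boot:

    from pathlib import Path

    # /sys/fs/selinux/enforce holds "1" (enforcing) or "0" (permissive)
    # whenever an SELinux policy is loaded.
    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        mode = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
        print(f"SELinux policy loaded, mode: {mode}")
    else:
        print("SELinux not active")

    # On first boot systemd writes the freshly generated machine ID here.
    print("machine-id:", Path("/etc/machine-id").read_text().strip())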
Oct 27 07:54:56.421667 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 07:54:56.421678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 07:54:56.421689 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 27 07:54:56.421700 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 07:54:56.421711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 07:54:56.421723 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 07:54:56.421734 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 07:54:56.421744 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 07:54:56.421755 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 07:54:56.421765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 07:54:56.421775 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 07:54:56.421788 systemd[1]: Reached target slices.target - Slice Units. Oct 27 07:54:56.421799 systemd[1]: Reached target swap.target - Swaps. Oct 27 07:54:56.421810 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 07:54:56.421821 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 07:54:56.421831 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 07:54:56.421841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 07:54:56.421853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 07:54:56.421864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 07:54:56.421874 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 07:54:56.421885 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 07:54:56.421895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 07:54:56.421906 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 07:54:56.421916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 07:54:56.421926 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 07:54:56.421938 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 27 07:54:56.421950 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 07:54:56.421960 systemd[1]: Reached target machines.target - Containers. Oct 27 07:54:56.421971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 07:54:56.421982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 07:54:56.421992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 07:54:56.422003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 07:54:56.422016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 07:54:56.422027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 27 07:54:56.422040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 07:54:56.422051 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 07:54:56.422062 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 07:54:56.422073 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 07:54:56.422084 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 07:54:56.422097 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 07:54:56.422108 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 07:54:56.422119 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 07:54:56.422131 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 07:54:56.422143 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 07:54:56.422153 kernel: ACPI: bus type drm_connector registered Oct 27 07:54:56.422163 kernel: fuse: init (API version 7.41) Oct 27 07:54:56.422175 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 07:54:56.422186 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 07:54:56.422197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 07:54:56.422208 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 07:54:56.422219 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 07:54:56.422231 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 07:54:56.422241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 07:54:56.422268 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 07:54:56.422297 systemd-journald[1212]: Collecting audit messages is disabled. Oct 27 07:54:56.422322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 07:54:56.422344 systemd-journald[1212]: Journal started Oct 27 07:54:56.422367 systemd-journald[1212]: Runtime Journal (/run/log/journal/0c271e022ff043d487b5cf5cef1be27d) is 6M, max 48.5M, 42.4M free. Oct 27 07:54:56.214031 systemd[1]: Queued start job for default target multi-user.target. Oct 27 07:54:56.225231 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 07:54:56.225690 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 07:54:56.425292 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 07:54:56.426255 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 07:54:56.427598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 07:54:56.429450 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 07:54:56.430876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 07:54:56.432330 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 07:54:56.432504 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 07:54:56.433893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 27 07:54:56.434042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 07:54:56.435505 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 07:54:56.435673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 07:54:56.437092 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 07:54:56.437247 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 07:54:56.438915 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 07:54:56.439072 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 07:54:56.440456 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 07:54:56.440603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 07:54:56.442040 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 07:54:56.443676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 07:54:56.445847 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 07:54:56.447607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 07:54:56.459703 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 07:54:56.461174 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 07:54:56.463461 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 07:54:56.465430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 07:54:56.466610 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 07:54:56.466652 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 07:54:56.468500 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 07:54:56.469893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 07:54:56.475265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 07:54:56.477326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 07:54:56.478489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 07:54:56.479292 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 07:54:56.480557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 07:54:56.484479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 07:54:56.485004 systemd-journald[1212]: Time spent on flushing to /var/log/journal/0c271e022ff043d487b5cf5cef1be27d is 15.446ms for 882 entries. Oct 27 07:54:56.485004 systemd-journald[1212]: System Journal (/var/log/journal/0c271e022ff043d487b5cf5cef1be27d) is 8M, max 163.5M, 155.5M free. Oct 27 07:54:56.514325 systemd-journald[1212]: Received client request to flush runtime journal. Oct 27 07:54:56.514431 kernel: loop1: detected capacity change from 0 to 200800 Oct 27 07:54:56.487578 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
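[Editor's note] The journald lines above show the runtime journal in /run being sized and then flushed to the persistent system journal under /var/log/journal. A sketch of checking and triggering the same thing by hand with journalctl (privileges required for the flush); this is an illustration, not part of the boot flow:

    import subprocess

    # --disk-usage reports the combined size of active and archived journal files;
    # --flush asks journald to move the runtime journal from /run/log/journal
    # to /var/log/journal, which is what systemd-journal-flush.service does above.
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    print(usage.stdout.strip())
    subprocess.run(["journalctl", "--flush"], check=True)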
Oct 27 07:54:56.489691 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 07:54:56.497502 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 07:54:56.499545 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 07:54:56.501630 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 07:54:56.504006 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 07:54:56.506545 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 07:54:56.511665 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 07:54:56.516071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 07:54:56.518178 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Oct 27 07:54:56.518197 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Oct 27 07:54:56.522531 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 07:54:56.524874 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 07:54:56.529112 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 07:54:56.539525 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 07:54:56.542388 kernel: loop2: detected capacity change from 0 to 119344 Oct 27 07:54:56.555789 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 07:54:56.560536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 07:54:56.562481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 07:54:56.567400 kernel: loop3: detected capacity change from 0 to 100624 Oct 27 07:54:56.571786 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 07:54:56.577529 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Oct 27 07:54:56.577787 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Oct 27 07:54:56.586754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 07:54:56.602368 kernel: loop4: detected capacity change from 0 to 200800 Oct 27 07:54:56.609355 kernel: loop5: detected capacity change from 0 to 119344 Oct 27 07:54:56.610537 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 07:54:56.617350 kernel: loop6: detected capacity change from 0 to 100624 Oct 27 07:54:56.620543 (sd-merge)[1279]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 27 07:54:56.623551 (sd-merge)[1279]: Merged extensions into '/usr'. Oct 27 07:54:56.629949 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 07:54:56.629965 systemd[1]: Reloading... Oct 27 07:54:56.670082 systemd-resolved[1273]: Positive Trust Anchors: Oct 27 07:54:56.670103 systemd-resolved[1273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 07:54:56.670106 systemd-resolved[1273]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 07:54:56.670138 systemd-resolved[1273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 07:54:56.678372 systemd-resolved[1273]: Defaulting to hostname 'linux'. Oct 27 07:54:56.684357 zram_generator::config[1312]: No configuration found. Oct 27 07:54:56.814971 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 07:54:56.815286 systemd[1]: Reloading finished in 185 ms. Oct 27 07:54:56.845883 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 07:54:56.847479 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 07:54:56.850615 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 07:54:56.863541 systemd[1]: Starting ensure-sysext.service... Oct 27 07:54:56.865372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 07:54:56.877418 systemd[1]: Reload requested from client PID 1346 ('systemctl') (unit ensure-sysext.service)... Oct 27 07:54:56.877434 systemd[1]: Reloading... Oct 27 07:54:56.880958 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 07:54:56.881101 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 07:54:56.881350 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 07:54:56.881551 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 07:54:56.882147 systemd-tmpfiles[1347]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 07:54:56.882430 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Oct 27 07:54:56.882486 systemd-tmpfiles[1347]: ACLs are not supported, ignoring. Oct 27 07:54:56.885974 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 07:54:56.885989 systemd-tmpfiles[1347]: Skipping /boot Oct 27 07:54:56.892038 systemd-tmpfiles[1347]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 07:54:56.892047 systemd-tmpfiles[1347]: Skipping /boot Oct 27 07:54:56.929397 zram_generator::config[1377]: No configuration found. Oct 27 07:54:57.055591 systemd[1]: Reloading finished in 177 ms. Oct 27 07:54:57.075793 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 07:54:57.094509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 07:54:57.102053 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 07:54:57.104417 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 07:54:57.124586 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
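[Editor's note] The (sd-merge) lines just above show the containerd-flatcar.raw, docker-flatcar.raw and kubernetes.raw system extensions being merged into /usr before the daemon reload. A sketch, assuming systemd-sysext is available on the host, of inspecting and refreshing that merge from a shell:

    import subprocess

    # "status" shows which hierarchies currently have extensions merged;
    # "list" enumerates the extension images found in the sysext search paths
    # (e.g. /etc/extensions, where Ignition placed kubernetes.raw earlier).
    for verb in ("status", "list"):
        out = subprocess.run(["systemd-sysext", verb],
                             capture_output=True, text=True)
        print(f"$ systemd-sysext {verb}\n{out.stdout}")

    # "refresh" unmerges and re-merges in one step, e.g. after adding a new .raw
    # image; it needs privileges, so it is left commented out here.
    # subprocess.run(["systemd-sysext", "refresh"], check=True)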
Oct 27 07:54:57.127173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 07:54:57.131538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 07:54:57.134358 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 07:54:57.138157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 07:54:57.140181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 07:54:57.143057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 07:54:57.147670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 07:54:57.149486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 07:54:57.149607 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 07:54:57.150675 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 07:54:57.154592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 07:54:57.154758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 07:54:57.157100 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 07:54:57.157515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 07:54:57.166002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 07:54:57.176513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 07:54:57.176745 systemd-udevd[1418]: Using default interface naming scheme 'v257'. Oct 27 07:54:57.179025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 07:54:57.180701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 07:54:57.180869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 07:54:57.183384 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 07:54:57.186005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 07:54:57.187505 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 07:54:57.189896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 07:54:57.190033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 07:54:57.192244 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 07:54:57.192433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 07:54:57.196910 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 07:54:57.199310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Oct 27 07:54:57.203472 augenrules[1455]: No rules Oct 27 07:54:57.208940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 07:54:57.210063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 07:54:57.213559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 07:54:57.216498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 07:54:57.219771 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 07:54:57.221591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 07:54:57.221644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 07:54:57.232466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 07:54:57.234418 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 07:54:57.243370 systemd[1]: Finished ensure-sysext.service. Oct 27 07:54:57.245528 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 07:54:57.245768 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 07:54:57.247280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 07:54:57.248198 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 07:54:57.250742 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 07:54:57.250902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 07:54:57.252478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 07:54:57.252641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 07:54:57.254718 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 07:54:57.254873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 07:54:57.268716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 07:54:57.271342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 07:54:57.273607 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 07:54:57.318121 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 27 07:54:57.321921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 07:54:57.327477 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 07:54:57.337887 systemd-networkd[1479]: lo: Link UP Oct 27 07:54:57.337895 systemd-networkd[1479]: lo: Gained carrier Oct 27 07:54:57.338908 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 27 07:54:57.339488 systemd-networkd[1479]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 07:54:57.339492 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 07:54:57.340259 systemd-networkd[1479]: eth0: Link UP Oct 27 07:54:57.340528 systemd-networkd[1479]: eth0: Gained carrier Oct 27 07:54:57.340545 systemd-networkd[1479]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 07:54:57.341052 systemd[1]: Reached target network.target - Network. Oct 27 07:54:57.343775 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 07:54:57.347006 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 07:54:57.358302 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 07:54:57.362957 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 07:54:57.364365 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 07:54:57.366233 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 07:54:57.367682 systemd-timesyncd[1489]: Network configuration changed, trying to establish connection. Oct 27 07:54:57.368696 systemd-timesyncd[1489]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 07:54:57.369520 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 07:54:57.371409 systemd-timesyncd[1489]: Initial clock synchronization to Mon 2025-10-27 07:54:57.272508 UTC. Oct 27 07:54:57.444713 ldconfig[1415]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 07:54:57.446081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 07:54:57.464412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 07:54:57.467231 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 07:54:57.492458 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 07:54:57.494135 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 07:54:57.497028 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 07:54:57.498393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 07:54:57.499746 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 07:54:57.501307 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 07:54:57.502592 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 07:54:57.504121 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 07:54:57.505518 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 07:54:57.505555 systemd[1]: Reached target paths.target - Path Units. Oct 27 07:54:57.506537 systemd[1]: Reached target timers.target - Timer Units. 
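[Editor's note] At this point eth0 has a DHCPv4 lease (10.0.0.105/16 via 10.0.0.1) and timesyncd has synchronized against the same host. A sketch using the standard systemd CLI tools to confirm that state; the tool names are standard, only the idea of running them here is the editor's:

    import subprocess

    def run(*cmd):
        print("$", " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)

    run("networkctl", "status", "eth0")      # carrier, address, gateway, .network file in use
    run("timedatectl", "timesync-status")    # NTP server, poll interval, offset
    run("resolvectl", "status")              # per-link DNS plus the trust anchors logged earlier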
Oct 27 07:54:57.508165 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 07:54:57.510544 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 07:54:57.513295 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 07:54:57.514938 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 07:54:57.516386 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 07:54:57.529191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 07:54:57.530874 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 07:54:57.532765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 07:54:57.534063 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 07:54:57.535136 systemd[1]: Reached target basic.target - Basic System. Oct 27 07:54:57.536225 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 07:54:57.536258 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 07:54:57.537172 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 07:54:57.539262 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 07:54:57.541210 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 07:54:57.543398 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 07:54:57.546507 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 07:54:57.547672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 07:54:57.549493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 07:54:57.552150 jq[1536]: false Oct 27 07:54:57.552438 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 07:54:57.554478 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 07:54:57.555500 extend-filesystems[1537]: Found /dev/vda6 Oct 27 07:54:57.557805 extend-filesystems[1537]: Found /dev/vda9 Oct 27 07:54:57.557793 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 07:54:57.561330 extend-filesystems[1537]: Checking size of /dev/vda9 Oct 27 07:54:57.560901 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 07:54:57.562481 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 07:54:57.562860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 07:54:57.565771 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 07:54:57.569186 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 07:54:57.576394 extend-filesystems[1537]: Resized partition /dev/vda9 Oct 27 07:54:57.574367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 07:54:57.576031 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 27 07:54:57.577480 jq[1553]: true Oct 27 07:54:57.576181 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 07:54:57.578718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 07:54:57.580927 extend-filesystems[1564]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 07:54:57.578940 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 07:54:57.581660 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 07:54:57.581830 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 07:54:57.587552 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 27 07:54:57.598041 update_engine[1550]: I20251027 07:54:57.597836 1550 main.cc:92] Flatcar Update Engine starting Oct 27 07:54:57.605107 tar[1565]: linux-arm64/LICENSE Oct 27 07:54:57.606086 tar[1565]: linux-arm64/helm Oct 27 07:54:57.610883 jq[1570]: true Oct 27 07:54:57.612912 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 07:54:57.624365 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 27 07:54:57.633732 dbus-daemon[1534]: [system] SELinux support is enabled Oct 27 07:54:57.634119 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 07:54:57.648041 extend-filesystems[1564]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 07:54:57.648041 extend-filesystems[1564]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 07:54:57.648041 extend-filesystems[1564]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 27 07:54:57.654043 update_engine[1550]: I20251027 07:54:57.646610 1550 update_check_scheduler.cc:74] Next update check in 7m33s Oct 27 07:54:57.638798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 07:54:57.654137 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Oct 27 07:54:57.638823 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 07:54:57.640383 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 07:54:57.640405 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 07:54:57.646500 systemd[1]: Started update-engine.service - Update Engine. Oct 27 07:54:57.649910 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 07:54:57.650092 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 07:54:57.655620 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 07:54:57.659496 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Oct 27 07:54:57.664432 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 07:54:57.666441 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 07:54:57.682559 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (Power Button) Oct 27 07:54:57.682937 systemd-logind[1548]: New seat seat0. Oct 27 07:54:57.683680 systemd[1]: Started systemd-logind.service - User Login Management. 
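[Editor's note] extend-filesystems grew /dev/vda9 online from 456704 to 1784827 4k blocks while it was mounted on /, using resize2fs. A standard-library-only sketch that reports the post-resize size; the 4096-byte normalization matches the "(4k) blocks" wording in the log, everything else is illustrative:

    import os

    st = os.statvfs("/")
    # f_blocks is the total block count in units of the fundamental block size f_frsize.
    total_4k_blocks = st.f_blocks * st.f_frsize // 4096
    print(f"/ now spans ~{total_4k_blocks} 4k blocks "
          f"({st.f_blocks * st.f_frsize / 2**30:.1f} GiB)")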
Oct 27 07:54:57.722071 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 07:54:57.790052 containerd[1582]: time="2025-10-27T07:54:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 07:54:57.791783 containerd[1582]: time="2025-10-27T07:54:57.791746440Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 07:54:57.806264 containerd[1582]: time="2025-10-27T07:54:57.806222440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.12µs" Oct 27 07:54:57.806392 containerd[1582]: time="2025-10-27T07:54:57.806372880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 07:54:57.806450 containerd[1582]: time="2025-10-27T07:54:57.806436280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 07:54:57.806643 containerd[1582]: time="2025-10-27T07:54:57.806611000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 07:54:57.806710 containerd[1582]: time="2025-10-27T07:54:57.806695720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 07:54:57.806774 containerd[1582]: time="2025-10-27T07:54:57.806760960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 07:54:57.806897 containerd[1582]: time="2025-10-27T07:54:57.806877680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 07:54:57.806962 containerd[1582]: time="2025-10-27T07:54:57.806948680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 07:54:57.807209 containerd[1582]: time="2025-10-27T07:54:57.807180960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 07:54:57.807271 containerd[1582]: time="2025-10-27T07:54:57.807256720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 07:54:57.807321 containerd[1582]: time="2025-10-27T07:54:57.807307280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 07:54:57.807399 containerd[1582]: time="2025-10-27T07:54:57.807384960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.807510760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.807730560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.807763000Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.807772400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.807815120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.808074880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 07:54:57.808379 containerd[1582]: time="2025-10-27T07:54:57.808137080Z" level=info msg="metadata content store policy set" policy=shared Oct 27 07:54:57.811932 containerd[1582]: time="2025-10-27T07:54:57.811905040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 07:54:57.812066 containerd[1582]: time="2025-10-27T07:54:57.812051880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 07:54:57.812135 containerd[1582]: time="2025-10-27T07:54:57.812121680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 07:54:57.812228 containerd[1582]: time="2025-10-27T07:54:57.812214360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 07:54:57.812297 containerd[1582]: time="2025-10-27T07:54:57.812281960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 07:54:57.812374 containerd[1582]: time="2025-10-27T07:54:57.812359720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 07:54:57.812427 containerd[1582]: time="2025-10-27T07:54:57.812416080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 07:54:57.812481 containerd[1582]: time="2025-10-27T07:54:57.812467760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 07:54:57.812533 containerd[1582]: time="2025-10-27T07:54:57.812519800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 07:54:57.812588 containerd[1582]: time="2025-10-27T07:54:57.812570440Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 07:54:57.812649 containerd[1582]: time="2025-10-27T07:54:57.812634080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 07:54:57.812712 containerd[1582]: time="2025-10-27T07:54:57.812697440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 07:54:57.812888 containerd[1582]: time="2025-10-27T07:54:57.812865960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 07:54:57.812961 containerd[1582]: time="2025-10-27T07:54:57.812946600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 07:54:57.813026 containerd[1582]: time="2025-10-27T07:54:57.813011960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 
07:54:57.813078 containerd[1582]: time="2025-10-27T07:54:57.813065480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 07:54:57.813130 containerd[1582]: time="2025-10-27T07:54:57.813116760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 07:54:57.813182 containerd[1582]: time="2025-10-27T07:54:57.813168680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 07:54:57.813249 containerd[1582]: time="2025-10-27T07:54:57.813234160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 07:54:57.813306 containerd[1582]: time="2025-10-27T07:54:57.813292680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 07:54:57.813388 containerd[1582]: time="2025-10-27T07:54:57.813372080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 07:54:57.813451 containerd[1582]: time="2025-10-27T07:54:57.813436680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 07:54:57.813519 containerd[1582]: time="2025-10-27T07:54:57.813504560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 07:54:57.813763 containerd[1582]: time="2025-10-27T07:54:57.813746040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 07:54:57.813831 containerd[1582]: time="2025-10-27T07:54:57.813817960Z" level=info msg="Start snapshots syncer" Oct 27 07:54:57.813899 containerd[1582]: time="2025-10-27T07:54:57.813886760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 07:54:57.814272 containerd[1582]: time="2025-10-27T07:54:57.814232840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 07:54:57.814465 containerd[1582]: time="2025-10-27T07:54:57.814446120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 07:54:57.814597 containerd[1582]: time="2025-10-27T07:54:57.814581280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 07:54:57.814816 containerd[1582]: time="2025-10-27T07:54:57.814791280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 07:54:57.814929 containerd[1582]: time="2025-10-27T07:54:57.814914080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 07:54:57.814994 containerd[1582]: time="2025-10-27T07:54:57.814981040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 07:54:57.815051 containerd[1582]: time="2025-10-27T07:54:57.815039400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 07:54:57.815104 containerd[1582]: time="2025-10-27T07:54:57.815091720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 07:54:57.815165 containerd[1582]: time="2025-10-27T07:54:57.815152240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 07:54:57.815215 containerd[1582]: time="2025-10-27T07:54:57.815203160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 07:54:57.815294 containerd[1582]: time="2025-10-27T07:54:57.815280160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 07:54:57.815372 containerd[1582]: 
time="2025-10-27T07:54:57.815356400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 07:54:57.815426 containerd[1582]: time="2025-10-27T07:54:57.815412280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 07:54:57.815534 containerd[1582]: time="2025-10-27T07:54:57.815520480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 07:54:57.815697 containerd[1582]: time="2025-10-27T07:54:57.815679400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 07:54:57.815755 containerd[1582]: time="2025-10-27T07:54:57.815740160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 07:54:57.815803 containerd[1582]: time="2025-10-27T07:54:57.815790560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 07:54:57.815848 containerd[1582]: time="2025-10-27T07:54:57.815835400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 07:54:57.815903 containerd[1582]: time="2025-10-27T07:54:57.815888560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 07:54:57.815958 containerd[1582]: time="2025-10-27T07:54:57.815945080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 07:54:57.816076 containerd[1582]: time="2025-10-27T07:54:57.816063720Z" level=info msg="runtime interface created" Oct 27 07:54:57.816130 containerd[1582]: time="2025-10-27T07:54:57.816119080Z" level=info msg="created NRI interface" Oct 27 07:54:57.816177 containerd[1582]: time="2025-10-27T07:54:57.816165160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 07:54:57.816225 containerd[1582]: time="2025-10-27T07:54:57.816214840Z" level=info msg="Connect containerd service" Oct 27 07:54:57.816314 containerd[1582]: time="2025-10-27T07:54:57.816299160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 07:54:57.817046 containerd[1582]: time="2025-10-27T07:54:57.817014400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 07:54:57.883406 containerd[1582]: time="2025-10-27T07:54:57.883218600Z" level=info msg="Start subscribing containerd event" Oct 27 07:54:57.883406 containerd[1582]: time="2025-10-27T07:54:57.883383760Z" level=info msg="Start recovering state" Oct 27 07:54:57.884057 containerd[1582]: time="2025-10-27T07:54:57.884037760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 07:54:57.884151 containerd[1582]: time="2025-10-27T07:54:57.884091000Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 27 07:54:57.884509 containerd[1582]: time="2025-10-27T07:54:57.884487720Z" level=info msg="Start event monitor" Oct 27 07:54:57.884539 containerd[1582]: time="2025-10-27T07:54:57.884521120Z" level=info msg="Start cni network conf syncer for default" Oct 27 07:54:57.884539 containerd[1582]: time="2025-10-27T07:54:57.884530880Z" level=info msg="Start streaming server" Oct 27 07:54:57.884573 containerd[1582]: time="2025-10-27T07:54:57.884540200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 07:54:57.884573 containerd[1582]: time="2025-10-27T07:54:57.884547880Z" level=info msg="runtime interface starting up..." Oct 27 07:54:57.884573 containerd[1582]: time="2025-10-27T07:54:57.884563400Z" level=info msg="starting plugins..." Oct 27 07:54:57.884639 containerd[1582]: time="2025-10-27T07:54:57.884583720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 07:54:57.884878 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 07:54:57.885015 containerd[1582]: time="2025-10-27T07:54:57.884993280Z" level=info msg="containerd successfully booted in 0.095292s" Oct 27 07:54:57.929424 tar[1565]: linux-arm64/README.md Oct 27 07:54:57.948405 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 07:54:58.607379 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 07:54:58.626436 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 07:54:58.629839 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 07:54:58.651687 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 07:54:58.651931 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 07:54:58.654916 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 07:54:58.676872 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 07:54:58.679803 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 07:54:58.682000 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 27 07:54:58.683467 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 07:54:59.347469 systemd-networkd[1479]: eth0: Gained IPv6LL Oct 27 07:54:59.351393 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 07:54:59.353256 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 07:54:59.355806 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 07:54:59.358305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:54:59.372693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 07:54:59.390591 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 07:54:59.390916 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 07:54:59.393385 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 07:54:59.395260 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 07:54:59.899731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:54:59.901486 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 07:54:59.902785 systemd[1]: Startup finished in 1.230s (kernel) + 4.728s (initrd) + 4.099s (userspace) = 10.058s. 
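
[editor's note] containerd comes up cleanly here, but at 07:54:57.817 its CRI plugin reported that no network config was found in /etc/cni/net.d (the confDir from the CRI config dumped above), so pod networking is deferred until the "cni network conf syncer" started at 07:54:57.884 sees a file appear there. The sketch below shows the general shape of a conflist that would satisfy that loader; the file name, subnet, and plugin choices are illustrative assumptions, not values taken from this host.

    # Sketch only: writes an illustrative CNI conflist of the kind containerd's
    # CRI plugin looks for under /etc/cni/net.d. All values here (file name,
    # bridge name, subnet, plugin set) are assumptions for illustration.
    import json
    import pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conflist, indent=2))

In a kubeadm-style bootstrap like the one in this log, the real config is normally installed later by a network add-on rather than written by hand.
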
Oct 27 07:54:59.903613 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 07:55:00.202112 kubelet[1672]: E1027 07:55:00.202001 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 07:55:00.204158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 07:55:00.204285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 07:55:00.206404 systemd[1]: kubelet.service: Consumed 686ms CPU time, 248.5M memory peak. Oct 27 07:55:02.212859 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 07:55:02.213939 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:46656.service - OpenSSH per-connection server daemon (10.0.0.1:46656). Oct 27 07:55:02.292835 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 46656 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:02.294764 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:02.300737 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 07:55:02.301616 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 07:55:02.306526 systemd-logind[1548]: New session 1 of user core. Oct 27 07:55:02.321614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 07:55:02.323956 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 07:55:02.339207 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 07:55:02.341362 systemd-logind[1548]: New session c1 of user core. Oct 27 07:55:02.442931 systemd[1690]: Queued start job for default target default.target. Oct 27 07:55:02.463222 systemd[1690]: Created slice app.slice - User Application Slice. Oct 27 07:55:02.463253 systemd[1690]: Reached target paths.target - Paths. Oct 27 07:55:02.463291 systemd[1690]: Reached target timers.target - Timers. Oct 27 07:55:02.464546 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 07:55:02.474426 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 07:55:02.474578 systemd[1690]: Reached target sockets.target - Sockets. Oct 27 07:55:02.474621 systemd[1690]: Reached target basic.target - Basic System. Oct 27 07:55:02.474650 systemd[1690]: Reached target default.target - Main User Target. Oct 27 07:55:02.474675 systemd[1690]: Startup finished in 127ms. Oct 27 07:55:02.474817 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 07:55:02.476217 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 07:55:02.538269 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Oct 27 07:55:02.593273 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:02.594474 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:02.598089 systemd-logind[1548]: New session 2 of user core. 
Oct 27 07:55:02.607463 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 07:55:02.659658 sshd[1704]: Connection closed by 10.0.0.1 port 46670 Oct 27 07:55:02.660083 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:02.672191 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:46670.service: Deactivated successfully. Oct 27 07:55:02.673643 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 07:55:02.674844 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Oct 27 07:55:02.677815 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686). Oct 27 07:55:02.678480 systemd-logind[1548]: Removed session 2. Oct 27 07:55:02.737344 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:02.738532 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:02.743069 systemd-logind[1548]: New session 3 of user core. Oct 27 07:55:02.751527 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 07:55:02.799187 sshd[1713]: Connection closed by 10.0.0.1 port 46686 Oct 27 07:55:02.799583 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:02.810304 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:46686.service: Deactivated successfully. Oct 27 07:55:02.812845 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 07:55:02.814613 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Oct 27 07:55:02.816965 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:46690.service - OpenSSH per-connection server daemon (10.0.0.1:46690). Oct 27 07:55:02.817456 systemd-logind[1548]: Removed session 3. Oct 27 07:55:02.877502 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 46690 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:02.878747 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:02.882379 systemd-logind[1548]: New session 4 of user core. Oct 27 07:55:02.890490 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 07:55:02.940984 sshd[1722]: Connection closed by 10.0.0.1 port 46690 Oct 27 07:55:02.941311 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:02.961550 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:46690.service: Deactivated successfully. Oct 27 07:55:02.963128 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 07:55:02.963849 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Oct 27 07:55:02.966353 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:46702.service - OpenSSH per-connection server daemon (10.0.0.1:46702). Oct 27 07:55:02.966907 systemd-logind[1548]: Removed session 4. Oct 27 07:55:03.018614 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 46702 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:03.019711 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:03.024196 systemd-logind[1548]: New session 5 of user core. Oct 27 07:55:03.037498 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 27 07:55:03.093524 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 07:55:03.093782 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 07:55:03.115149 sudo[1732]: pam_unix(sudo:session): session closed for user root Oct 27 07:55:03.116819 sshd[1731]: Connection closed by 10.0.0.1 port 46702 Oct 27 07:55:03.117181 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:03.127254 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:46702.service: Deactivated successfully. Oct 27 07:55:03.128814 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 07:55:03.129474 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Oct 27 07:55:03.131796 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:46712.service - OpenSSH per-connection server daemon (10.0.0.1:46712). Oct 27 07:55:03.132256 systemd-logind[1548]: Removed session 5. Oct 27 07:55:03.185226 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 46712 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:03.186468 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:03.190289 systemd-logind[1548]: New session 6 of user core. Oct 27 07:55:03.199482 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 07:55:03.250556 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 07:55:03.250822 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 07:55:03.255367 sudo[1743]: pam_unix(sudo:session): session closed for user root Oct 27 07:55:03.261276 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 07:55:03.261570 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 07:55:03.270981 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 07:55:03.310868 augenrules[1765]: No rules Oct 27 07:55:03.312125 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 07:55:03.312405 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 07:55:03.314509 sudo[1742]: pam_unix(sudo:session): session closed for user root Oct 27 07:55:03.316367 sshd[1741]: Connection closed by 10.0.0.1 port 46712 Oct 27 07:55:03.316256 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:03.328323 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:46712.service: Deactivated successfully. Oct 27 07:55:03.331684 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 07:55:03.332316 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Oct 27 07:55:03.334407 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:46716.service - OpenSSH per-connection server daemon (10.0.0.1:46716). Oct 27 07:55:03.334836 systemd-logind[1548]: Removed session 6. Oct 27 07:55:03.399221 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 46716 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:55:03.400275 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:55:03.403899 systemd-logind[1548]: New session 7 of user core. Oct 27 07:55:03.413482 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 27 07:55:03.464622 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 07:55:03.464880 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 07:55:03.733838 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 07:55:03.748606 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 07:55:03.941406 dockerd[1799]: time="2025-10-27T07:55:03.941316762Z" level=info msg="Starting up" Oct 27 07:55:03.942068 dockerd[1799]: time="2025-10-27T07:55:03.942050652Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 07:55:03.951556 dockerd[1799]: time="2025-10-27T07:55:03.951515647Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 07:55:04.099263 dockerd[1799]: time="2025-10-27T07:55:04.098984677Z" level=info msg="Loading containers: start." Oct 27 07:55:04.107357 kernel: Initializing XFRM netlink socket Oct 27 07:55:04.285817 systemd-networkd[1479]: docker0: Link UP Oct 27 07:55:04.288617 dockerd[1799]: time="2025-10-27T07:55:04.288585306Z" level=info msg="Loading containers: done." Oct 27 07:55:04.299788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2365259668-merged.mount: Deactivated successfully. Oct 27 07:55:04.303358 dockerd[1799]: time="2025-10-27T07:55:04.303174261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 07:55:04.303358 dockerd[1799]: time="2025-10-27T07:55:04.303244679Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 07:55:04.303358 dockerd[1799]: time="2025-10-27T07:55:04.303311795Z" level=info msg="Initializing buildkit" Oct 27 07:55:04.322651 dockerd[1799]: time="2025-10-27T07:55:04.322624025Z" level=info msg="Completed buildkit initialization" Oct 27 07:55:04.328873 dockerd[1799]: time="2025-10-27T07:55:04.328838736Z" level=info msg="Daemon has completed initialization" Oct 27 07:55:04.328951 dockerd[1799]: time="2025-10-27T07:55:04.328895946Z" level=info msg="API listen on /run/docker.sock" Oct 27 07:55:04.329138 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 07:55:04.728407 containerd[1582]: time="2025-10-27T07:55:04.728370107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 27 07:55:05.279199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299015309.mount: Deactivated successfully. 
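
[editor's note] dockerd reports version 28.0.4 above and its API listening on /run/docker.sock. As a minimal sketch, the daemon can be queried over that Unix socket with nothing but the standard library; this assumes permission to open the socket (root or the docker group) and uses the Engine API's /version endpoint.

    # Sketch only: query the Docker Engine API over the Unix socket reported
    # above ("API listen on /run/docker.sock").
    import json
    import socket

    def docker_version(sock_path="/run/docker.sock"):
        # Speak HTTP/1.0 so the response is close-delimited rather than
        # chunked, which keeps the parsing below trivial.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"GET /version HTTP/1.0\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        _headers, _, body = raw.partition(b"\r\n\r\n")
        return json.loads(body)

    if __name__ == "__main__":
        info = docker_version()
        print(info.get("Version"), info.get("ApiVersion"))

Run against this host it would be expected to print 28.0.4, matching the "Docker daemon" line above.
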
Oct 27 07:55:06.169003 containerd[1582]: time="2025-10-27T07:55:06.168959956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:06.170092 containerd[1582]: time="2025-10-27T07:55:06.170048922Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Oct 27 07:55:06.170714 containerd[1582]: time="2025-10-27T07:55:06.170688073Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:06.173198 containerd[1582]: time="2025-10-27T07:55:06.173172335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:06.174376 containerd[1582]: time="2025-10-27T07:55:06.174150281Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.445740061s" Oct 27 07:55:06.174376 containerd[1582]: time="2025-10-27T07:55:06.174196131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 27 07:55:06.175107 containerd[1582]: time="2025-10-27T07:55:06.175063257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 27 07:55:07.123347 containerd[1582]: time="2025-10-27T07:55:07.123292308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:07.123916 containerd[1582]: time="2025-10-27T07:55:07.123882848Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Oct 27 07:55:07.125309 containerd[1582]: time="2025-10-27T07:55:07.124921921Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:07.127897 containerd[1582]: time="2025-10-27T07:55:07.127862939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:07.128624 containerd[1582]: time="2025-10-27T07:55:07.128592375Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 953.494019ms" Oct 27 07:55:07.128695 containerd[1582]: time="2025-10-27T07:55:07.128625894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 27 07:55:07.129049 
containerd[1582]: time="2025-10-27T07:55:07.129021539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 27 07:55:08.021994 containerd[1582]: time="2025-10-27T07:55:08.021953179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:08.023372 containerd[1582]: time="2025-10-27T07:55:08.023341617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Oct 27 07:55:08.024276 containerd[1582]: time="2025-10-27T07:55:08.024249977Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:08.027538 containerd[1582]: time="2025-10-27T07:55:08.027500631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:08.028667 containerd[1582]: time="2025-10-27T07:55:08.028628695Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 899.576662ms" Oct 27 07:55:08.028706 containerd[1582]: time="2025-10-27T07:55:08.028667691Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 27 07:55:08.029423 containerd[1582]: time="2025-10-27T07:55:08.029367353Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 27 07:55:09.042359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239659669.mount: Deactivated successfully. 
Oct 27 07:55:09.202947 containerd[1582]: time="2025-10-27T07:55:09.202469568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:09.202947 containerd[1582]: time="2025-10-27T07:55:09.202836311Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Oct 27 07:55:09.203931 containerd[1582]: time="2025-10-27T07:55:09.203900919Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:09.205900 containerd[1582]: time="2025-10-27T07:55:09.205873888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:09.206950 containerd[1582]: time="2025-10-27T07:55:09.206548497Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.17711814s" Oct 27 07:55:09.206950 containerd[1582]: time="2025-10-27T07:55:09.206595367Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 27 07:55:09.207168 containerd[1582]: time="2025-10-27T07:55:09.207140017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 27 07:55:09.693529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082362973.mount: Deactivated successfully. Oct 27 07:55:10.450348 containerd[1582]: time="2025-10-27T07:55:10.450285104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:10.450730 containerd[1582]: time="2025-10-27T07:55:10.450704846Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Oct 27 07:55:10.452009 containerd[1582]: time="2025-10-27T07:55:10.451955133Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:10.454665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 27 07:55:10.455573 containerd[1582]: time="2025-10-27T07:55:10.455546581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:10.456606 containerd[1582]: time="2025-10-27T07:55:10.456576962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.249400404s" Oct 27 07:55:10.456657 containerd[1582]: time="2025-10-27T07:55:10.456611319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 27 07:55:10.458241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:10.458921 containerd[1582]: time="2025-10-27T07:55:10.458708871Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 27 07:55:10.589957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:10.602635 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 07:55:10.636041 kubelet[2154]: E1027 07:55:10.635977 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 07:55:10.638785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 07:55:10.638916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 07:55:10.639389 systemd[1]: kubelet.service: Consumed 146ms CPU time, 106.8M memory peak. Oct 27 07:55:11.031798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334836653.mount: Deactivated successfully. 
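
[editor's note] This is the second kubelet start (07:55:00 and 07:55:10) to fail with the same error: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is written during init/join, so the failures before that point are expected and systemd simply keeps restarting the unit (restart counter 1 above) until the config appears, which it evidently does before the successful start at 07:55:18. The log never shows the file's contents; the sketch below is only an illustrative, minimal KubeletConfiguration of the general kind kubeadm generates, with every field an assumption.

    # Sketch only: an illustrative, minimal /var/lib/kubelet/config.yaml.
    # The real file on this host is produced by the provisioning flow
    # (not shown in the log); these fields are assumptions for illustration.
    import pathlib

    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # consistent with SystemdCgroup=true in the CRI config above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    """

    target = pathlib.Path("/var/lib/kubelet/config.yaml")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(KUBELET_CONFIG)

The staticPodPath and client CA path shown here match the values the kubelet logs once it does start (07:55:19), which is why they are used in the sketch.
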
Oct 27 07:55:11.035700 containerd[1582]: time="2025-10-27T07:55:11.035657651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:11.036681 containerd[1582]: time="2025-10-27T07:55:11.036655613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Oct 27 07:55:11.037643 containerd[1582]: time="2025-10-27T07:55:11.037611824Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:11.039472 containerd[1582]: time="2025-10-27T07:55:11.039436752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:11.040190 containerd[1582]: time="2025-10-27T07:55:11.040167681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 581.428642ms" Oct 27 07:55:11.040255 containerd[1582]: time="2025-10-27T07:55:11.040194864Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 27 07:55:11.040773 containerd[1582]: time="2025-10-27T07:55:11.040589706Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 27 07:55:13.766386 containerd[1582]: time="2025-10-27T07:55:13.766323811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:13.766877 containerd[1582]: time="2025-10-27T07:55:13.766829430Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Oct 27 07:55:13.767848 containerd[1582]: time="2025-10-27T07:55:13.767822258Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:13.770797 containerd[1582]: time="2025-10-27T07:55:13.770768834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:13.771931 containerd[1582]: time="2025-10-27T07:55:13.771907066Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.731281034s" Oct 27 07:55:13.771963 containerd[1582]: time="2025-10-27T07:55:13.771936818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 27 07:55:18.227760 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:18.227898 systemd[1]: kubelet.service: Consumed 146ms CPU time, 106.8M memory peak. 
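
[editor's note] Between 07:55:04 and 07:55:13 containerd pulls the seven control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd), and each "Pulled image ... in <duration>" line above carries a per-image timing. A small stdlib-only sketch for extracting and summing those timings from journal output, one entry per line (for example, journalctl -u containerd --no-pager piped into it):

    # Sketch only: summarize containerd image-pull timings from journal text
    # on stdin. The regex targets the escaped quoting exactly as it appears
    # in the msg="Pulled image \"<name>\" ... in <duration>" lines above.
    import re
    import sys

    PULLED = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+)(ms|s)"')

    def to_seconds(value: str, unit: str) -> float:
        return float(value) / 1000.0 if unit == "ms" else float(value)

    def main() -> None:
        total = 0.0
        for line in sys.stdin:
            m = PULLED.search(line)
            if not m:
                continue
            image, value, unit = m.groups()
            took = to_seconds(value, unit)
            total += took
            print(f"{image:55s} {took:8.3f}s")
        print(f"{'total':55s} {total:8.3f}s")

    if __name__ == "__main__":
        main()

On this excerpt the per-image times range from roughly 0.58s (pause:3.10.1) to 2.73s (etcd:3.6.4-0).
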
Oct 27 07:55:18.229780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:18.251468 systemd[1]: Reload requested from client PID 2240 ('systemctl') (unit session-7.scope)... Oct 27 07:55:18.251484 systemd[1]: Reloading... Oct 27 07:55:18.321414 zram_generator::config[2290]: No configuration found. Oct 27 07:55:18.501442 systemd[1]: Reloading finished in 249 ms. Oct 27 07:55:18.549301 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:18.551182 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 07:55:18.551417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:18.551460 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.3M memory peak. Oct 27 07:55:18.552715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:18.684754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:18.688758 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 07:55:18.718439 kubelet[2331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 07:55:18.718439 kubelet[2331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 07:55:18.719008 kubelet[2331]: I1027 07:55:18.718951 2331 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 07:55:19.434164 kubelet[2331]: I1027 07:55:19.434103 2331 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 07:55:19.434164 kubelet[2331]: I1027 07:55:19.434139 2331 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 07:55:19.435281 kubelet[2331]: I1027 07:55:19.435247 2331 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 07:55:19.435281 kubelet[2331]: I1027 07:55:19.435269 2331 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 07:55:19.435565 kubelet[2331]: I1027 07:55:19.435536 2331 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 07:55:19.442599 kubelet[2331]: E1027 07:55:19.442569 2331 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 07:55:19.443350 kubelet[2331]: I1027 07:55:19.443318 2331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 07:55:19.447092 kubelet[2331]: I1027 07:55:19.447073 2331 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 07:55:19.449330 kubelet[2331]: I1027 07:55:19.449315 2331 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 27 07:55:19.449544 kubelet[2331]: I1027 07:55:19.449510 2331 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 07:55:19.449663 kubelet[2331]: I1027 07:55:19.449531 2331 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 07:55:19.449753 kubelet[2331]: I1027 07:55:19.449664 2331 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 07:55:19.449753 kubelet[2331]: I1027 07:55:19.449674 2331 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 07:55:19.449794 kubelet[2331]: I1027 07:55:19.449758 2331 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 07:55:19.451913 kubelet[2331]: I1027 07:55:19.451896 2331 state_mem.go:36] "Initialized new in-memory state store" Oct 27 07:55:19.453009 kubelet[2331]: I1027 07:55:19.452972 2331 kubelet.go:475] "Attempting to sync node with API server" Oct 27 07:55:19.453009 kubelet[2331]: I1027 07:55:19.452999 2331 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 07:55:19.453989 kubelet[2331]: E1027 07:55:19.453962 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 07:55:19.454028 kubelet[2331]: I1027 07:55:19.453998 2331 kubelet.go:387] "Adding apiserver pod source" Oct 27 07:55:19.454028 kubelet[2331]: I1027 07:55:19.454015 2331 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 07:55:19.454573 kubelet[2331]: E1027 07:55:19.454513 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 07:55:19.454980 kubelet[2331]: I1027 07:55:19.454962 2331 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 07:55:19.455657 kubelet[2331]: I1027 07:55:19.455639 2331 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 07:55:19.455747 kubelet[2331]: I1027 07:55:19.455736 2331 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 07:55:19.455823 kubelet[2331]: W1027 07:55:19.455813 2331 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 07:55:19.457784 kubelet[2331]: I1027 07:55:19.457766 2331 server.go:1262] "Started kubelet" Oct 27 07:55:19.458246 kubelet[2331]: I1027 07:55:19.458181 2331 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 07:55:19.458293 kubelet[2331]: I1027 07:55:19.458252 2331 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 07:55:19.458551 kubelet[2331]: I1027 07:55:19.458519 2331 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 07:55:19.458710 kubelet[2331]: I1027 07:55:19.458684 2331 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 07:55:19.458895 kubelet[2331]: I1027 07:55:19.458879 2331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 07:55:19.459508 kubelet[2331]: I1027 07:55:19.459478 2331 server.go:310] "Adding debug handlers to kubelet server" Oct 27 07:55:19.460049 kubelet[2331]: I1027 07:55:19.460013 2331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 07:55:19.461185 kubelet[2331]: E1027 07:55:19.461151 2331 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 07:55:19.461185 kubelet[2331]: I1027 07:55:19.461189 2331 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 07:55:19.461556 kubelet[2331]: E1027 07:55:19.461516 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Oct 27 07:55:19.462083 kubelet[2331]: E1027 07:55:19.462036 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 07:55:19.462083 kubelet[2331]: E1027 07:55:19.461102 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187249f4abc1a979 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 07:55:19.457728889 +0000 UTC m=+0.766274601,LastTimestamp:2025-10-27 07:55:19.457728889 +0000 UTC m=+0.766274601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 07:55:19.462183 kubelet[2331]: I1027 07:55:19.462096 2331 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 07:55:19.462183 kubelet[2331]: I1027 07:55:19.462148 2331 reconciler.go:29] "Reconciler: start to sync state" Oct 27 07:55:19.462540 kubelet[2331]: I1027 07:55:19.462516 2331 factory.go:223] Registration of the systemd container factory successfully Oct 27 07:55:19.462599 kubelet[2331]: I1027 07:55:19.462589 2331 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 07:55:19.463511 kubelet[2331]: E1027 07:55:19.463479 2331 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 07:55:19.463728 kubelet[2331]: I1027 07:55:19.463698 2331 factory.go:223] Registration of the containerd container factory successfully Oct 27 07:55:19.475345 kubelet[2331]: I1027 07:55:19.475283 2331 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 07:55:19.475345 kubelet[2331]: I1027 07:55:19.475300 2331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 07:55:19.475345 kubelet[2331]: I1027 07:55:19.475320 2331 state_mem.go:36] "Initialized new in-memory state store" Oct 27 07:55:19.477204 kubelet[2331]: I1027 07:55:19.477186 2331 policy_none.go:49] "None policy: Start" Oct 27 07:55:19.477298 kubelet[2331]: I1027 07:55:19.477287 2331 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 07:55:19.477366 kubelet[2331]: I1027 07:55:19.477356 2331 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 07:55:19.478286 kubelet[2331]: I1027 07:55:19.478271 2331 policy_none.go:47] "Start" Oct 27 07:55:19.481424 kubelet[2331]: I1027 07:55:19.481384 2331 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 07:55:19.482836 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 27 07:55:19.483759 kubelet[2331]: I1027 07:55:19.483730 2331 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 27 07:55:19.483759 kubelet[2331]: I1027 07:55:19.483754 2331 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 07:55:19.483832 kubelet[2331]: I1027 07:55:19.483784 2331 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 07:55:19.483832 kubelet[2331]: E1027 07:55:19.483820 2331 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 07:55:19.485171 kubelet[2331]: E1027 07:55:19.485129 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 07:55:19.494870 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 07:55:19.497481 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 07:55:19.519137 kubelet[2331]: E1027 07:55:19.519100 2331 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 07:55:19.519498 kubelet[2331]: I1027 07:55:19.519477 2331 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 07:55:19.519617 kubelet[2331]: I1027 07:55:19.519582 2331 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 07:55:19.519856 kubelet[2331]: I1027 07:55:19.519840 2331 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 07:55:19.520564 kubelet[2331]: E1027 07:55:19.520540 2331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 07:55:19.520770 kubelet[2331]: E1027 07:55:19.520749 2331 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 07:55:19.593895 systemd[1]: Created slice kubepods-burstable-podb657af1b14a64d8050659533a8f6a625.slice - libcontainer container kubepods-burstable-podb657af1b14a64d8050659533a8f6a625.slice. Oct 27 07:55:19.600999 kubelet[2331]: E1027 07:55:19.600952 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:19.603285 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 27 07:55:19.613444 kubelet[2331]: E1027 07:55:19.613413 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:19.615616 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
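
[editor's note] From 07:55:19.44 onward the kubelet is running but every request to https://10.0.0.105:6443 fails with connection refused, and node registration is rejected for the same reason. That is the normal kubeadm bootstrap ordering: the API server the kubelet is dialing is one of the static pods it is itself about to launch from /etc/kubernetes/manifests, which is what the kubepods-burstable slices created at 07:55:19.59 correspond to. A minimal sketch of waiting for that endpoint, assuming the unauthenticated /readyz endpoint is reachable as it is on a default kubeadm API server; certificate verification is skipped because this only checks liveness.

    # Sketch only: poll the API-server endpoint the kubelet is dialing in the
    # log above (10.0.0.105:6443) until it answers, mirroring the condition
    # the kubelet itself retries on while the static pods start.
    import ssl
    import time
    import urllib.error
    import urllib.request

    def wait_for_apiserver(url="https://10.0.0.105:6443/readyz", timeout_s=300):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                    if resp.status == 200:
                        return True
            except (urllib.error.URLError, OSError):
                pass  # connection refused or not ready yet; retry
            time.sleep(2)
        return False

    if __name__ == "__main__":
        print("apiserver ready" if wait_for_apiserver() else "timed out")

Once the kube-apiserver static pod is up, the lease, watch, and node-registration errors seen below this point would be expected to stop.
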
Oct 27 07:55:19.617168 kubelet[2331]: E1027 07:55:19.617007 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:19.621896 kubelet[2331]: I1027 07:55:19.621876 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 07:55:19.622443 kubelet[2331]: E1027 07:55:19.622418 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 27 07:55:19.662999 kubelet[2331]: E1027 07:55:19.662961 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Oct 27 07:55:19.763424 kubelet[2331]: I1027 07:55:19.763284 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:19.764075 kubelet[2331]: I1027 07:55:19.763322 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:19.764075 kubelet[2331]: I1027 07:55:19.763844 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:19.764075 kubelet[2331]: I1027 07:55:19.763887 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:19.764075 kubelet[2331]: I1027 07:55:19.763904 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:19.764075 kubelet[2331]: I1027 07:55:19.763918 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:19.764219 kubelet[2331]: I1027 07:55:19.763966 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:19.764219 kubelet[2331]: I1027 07:55:19.763981 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:19.764219 kubelet[2331]: I1027 07:55:19.763998 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:19.824815 kubelet[2331]: I1027 07:55:19.824567 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 07:55:19.825054 kubelet[2331]: E1027 07:55:19.825029 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 27 07:55:19.903473 kubelet[2331]: E1027 07:55:19.903440 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:19.904259 containerd[1582]: time="2025-10-27T07:55:19.904216983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b657af1b14a64d8050659533a8f6a625,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:19.916062 kubelet[2331]: E1027 07:55:19.915824 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:19.916390 containerd[1582]: time="2025-10-27T07:55:19.916362260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:19.919773 kubelet[2331]: E1027 07:55:19.919738 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:19.920280 containerd[1582]: time="2025-10-27T07:55:19.920250270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:20.063757 kubelet[2331]: E1027 07:55:20.063619 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Oct 27 07:55:20.226678 kubelet[2331]: I1027 07:55:20.226609 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 07:55:20.226974 kubelet[2331]: E1027 07:55:20.226943 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Oct 
27 07:55:20.315768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019112357.mount: Deactivated successfully. Oct 27 07:55:20.322323 containerd[1582]: time="2025-10-27T07:55:20.321834908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 07:55:20.323309 containerd[1582]: time="2025-10-27T07:55:20.323257682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 07:55:20.324492 containerd[1582]: time="2025-10-27T07:55:20.324456638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 27 07:55:20.325058 containerd[1582]: time="2025-10-27T07:55:20.325030753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 07:55:20.326599 containerd[1582]: time="2025-10-27T07:55:20.326549066Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 07:55:20.327298 containerd[1582]: time="2025-10-27T07:55:20.327273284Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 07:55:20.327375 containerd[1582]: time="2025-10-27T07:55:20.327310501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 07:55:20.329168 containerd[1582]: time="2025-10-27T07:55:20.329129462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 07:55:20.330624 containerd[1582]: time="2025-10-27T07:55:20.330530649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 423.989878ms" Oct 27 07:55:20.332031 containerd[1582]: time="2025-10-27T07:55:20.331777335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 413.462815ms" Oct 27 07:55:20.333840 containerd[1582]: time="2025-10-27T07:55:20.333814038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 411.863845ms" Oct 27 07:55:20.349239 containerd[1582]: time="2025-10-27T07:55:20.349206473Z" level=info msg="connecting to shim 975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7" 
address="unix:///run/containerd/s/249f8467b68d7c5cd4f4a0a0797b456769a5e35e9fb8980bc8510ea9d1451787" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:20.357457 containerd[1582]: time="2025-10-27T07:55:20.357414405Z" level=info msg="connecting to shim d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4" address="unix:///run/containerd/s/cd6e1293cfd2e5eef551cef9a5207af3de45aca8d3572c18437faacc71ece82e" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:20.357833 containerd[1582]: time="2025-10-27T07:55:20.357803117Z" level=info msg="connecting to shim d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d" address="unix:///run/containerd/s/d46793dfecb8eede2def189153499c14b9fa25df1faf14757f1fde63cde77583" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:20.376577 systemd[1]: Started cri-containerd-975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7.scope - libcontainer container 975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7. Oct 27 07:55:20.377874 kubelet[2331]: E1027 07:55:20.377821 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 07:55:20.381046 systemd[1]: Started cri-containerd-d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d.scope - libcontainer container d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d. Oct 27 07:55:20.381928 systemd[1]: Started cri-containerd-d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4.scope - libcontainer container d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4. 
Oct 27 07:55:20.418944 containerd[1582]: time="2025-10-27T07:55:20.418903677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7\"" Oct 27 07:55:20.420997 kubelet[2331]: E1027 07:55:20.420963 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:20.422669 containerd[1582]: time="2025-10-27T07:55:20.422631982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4\"" Oct 27 07:55:20.423225 kubelet[2331]: E1027 07:55:20.423204 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:20.425429 containerd[1582]: time="2025-10-27T07:55:20.425400898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b657af1b14a64d8050659533a8f6a625,Namespace:kube-system,Attempt:0,} returns sandbox id \"d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d\"" Oct 27 07:55:20.425853 kubelet[2331]: E1027 07:55:20.425836 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:20.427125 containerd[1582]: time="2025-10-27T07:55:20.427087064Z" level=info msg="CreateContainer within sandbox \"975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 07:55:20.428151 containerd[1582]: time="2025-10-27T07:55:20.428101418Z" level=info msg="CreateContainer within sandbox \"d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 07:55:20.429988 containerd[1582]: time="2025-10-27T07:55:20.429935650Z" level=info msg="CreateContainer within sandbox \"d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 07:55:20.437311 containerd[1582]: time="2025-10-27T07:55:20.437247992Z" level=info msg="Container 54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:20.439191 containerd[1582]: time="2025-10-27T07:55:20.438516384Z" level=info msg="Container 12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:20.440133 containerd[1582]: time="2025-10-27T07:55:20.440090701Z" level=info msg="Container 76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:20.444428 containerd[1582]: time="2025-10-27T07:55:20.444395599Z" level=info msg="CreateContainer within sandbox \"d9d84b9612f8fd4e13a08738641f430d5b9d7ab59f37289d547f372524e429e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29\"" Oct 27 07:55:20.445036 containerd[1582]: time="2025-10-27T07:55:20.445013245Z" 
level=info msg="StartContainer for \"54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29\"" Oct 27 07:55:20.446115 containerd[1582]: time="2025-10-27T07:55:20.446069772Z" level=info msg="connecting to shim 54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29" address="unix:///run/containerd/s/cd6e1293cfd2e5eef551cef9a5207af3de45aca8d3572c18437faacc71ece82e" protocol=ttrpc version=3 Oct 27 07:55:20.448426 containerd[1582]: time="2025-10-27T07:55:20.448390734Z" level=info msg="CreateContainer within sandbox \"d38bdcbdfc9cda89467b8f355599d8c32554b7ce0c34e733fafbe2fcc381cf1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65\"" Oct 27 07:55:20.448838 containerd[1582]: time="2025-10-27T07:55:20.448782604Z" level=info msg="StartContainer for \"76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65\"" Oct 27 07:55:20.449398 containerd[1582]: time="2025-10-27T07:55:20.449328017Z" level=info msg="CreateContainer within sandbox \"975732c192d1e74d9425fb1979f24839e27b0ba312549c567af5b9e7182928b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0\"" Oct 27 07:55:20.449740 containerd[1582]: time="2025-10-27T07:55:20.449718049Z" level=info msg="StartContainer for \"12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0\"" Oct 27 07:55:20.450046 containerd[1582]: time="2025-10-27T07:55:20.450023014Z" level=info msg="connecting to shim 76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65" address="unix:///run/containerd/s/d46793dfecb8eede2def189153499c14b9fa25df1faf14757f1fde63cde77583" protocol=ttrpc version=3 Oct 27 07:55:20.450673 containerd[1582]: time="2025-10-27T07:55:20.450650814Z" level=info msg="connecting to shim 12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0" address="unix:///run/containerd/s/249f8467b68d7c5cd4f4a0a0797b456769a5e35e9fb8980bc8510ea9d1451787" protocol=ttrpc version=3 Oct 27 07:55:20.465523 systemd[1]: Started cri-containerd-54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29.scope - libcontainer container 54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29. Oct 27 07:55:20.470284 systemd[1]: Started cri-containerd-12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0.scope - libcontainer container 12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0. Oct 27 07:55:20.471097 systemd[1]: Started cri-containerd-76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65.scope - libcontainer container 76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65. 
Oct 27 07:55:20.515888 containerd[1582]: time="2025-10-27T07:55:20.515763858Z" level=info msg="StartContainer for \"54fa384b189f82edd5a75a26bed740bce79df9259a4b4246141c6b4c0dbc1a29\" returns successfully" Oct 27 07:55:20.534673 containerd[1582]: time="2025-10-27T07:55:20.534631240Z" level=info msg="StartContainer for \"12c4accc947fae0681d93264474cd474778fb190f34c53b32f48fc1e3e7ddeb0\" returns successfully" Oct 27 07:55:20.539435 containerd[1582]: time="2025-10-27T07:55:20.539388210Z" level=info msg="StartContainer for \"76f8b525571b08205994631aff4d3282dd4b9ada37a6d1691592c7dcdeebbc65\" returns successfully" Oct 27 07:55:20.594379 kubelet[2331]: E1027 07:55:20.593664 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 07:55:21.028959 kubelet[2331]: I1027 07:55:21.028420 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 07:55:21.510445 kubelet[2331]: E1027 07:55:21.510096 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:21.510445 kubelet[2331]: E1027 07:55:21.510218 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:21.512982 kubelet[2331]: E1027 07:55:21.512911 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:21.513064 kubelet[2331]: E1027 07:55:21.513007 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:21.515372 kubelet[2331]: E1027 07:55:21.515354 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 07:55:21.515480 kubelet[2331]: E1027 07:55:21.515467 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:22.300576 kubelet[2331]: E1027 07:55:22.300543 2331 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 27 07:55:22.397511 kubelet[2331]: I1027 07:55:22.397227 2331 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 07:55:22.415011 kubelet[2331]: E1027 07:55:22.414923 2331 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.187249f4abc1a979 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 07:55:19.457728889 +0000 UTC m=+0.766274601,LastTimestamp:2025-10-27 07:55:19.457728889 +0000 UTC m=+0.766274601,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 07:55:22.455547 kubelet[2331]: I1027 07:55:22.455075 2331 apiserver.go:52] "Watching apiserver" Oct 27 07:55:22.462076 kubelet[2331]: I1027 07:55:22.461747 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:22.463381 kubelet[2331]: I1027 07:55:22.462676 2331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 07:55:22.469404 kubelet[2331]: E1027 07:55:22.469303 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:22.469404 kubelet[2331]: I1027 07:55:22.469350 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:22.471713 kubelet[2331]: E1027 07:55:22.471383 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:22.471713 kubelet[2331]: I1027 07:55:22.471445 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:22.475010 kubelet[2331]: E1027 07:55:22.474886 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:22.518070 kubelet[2331]: I1027 07:55:22.518042 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:22.518291 kubelet[2331]: I1027 07:55:22.518135 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:22.518291 kubelet[2331]: I1027 07:55:22.518258 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:22.521363 kubelet[2331]: E1027 07:55:22.521104 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:22.521363 kubelet[2331]: E1027 07:55:22.521280 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:22.522364 kubelet[2331]: E1027 07:55:22.521927 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:22.522364 kubelet[2331]: E1027 07:55:22.521930 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:22.522364 kubelet[2331]: E1027 07:55:22.522071 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:22.522364 kubelet[2331]: E1027 07:55:22.522084 2331 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:23.519228 kubelet[2331]: I1027 07:55:23.518940 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:23.519228 kubelet[2331]: I1027 07:55:23.519054 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:23.523289 kubelet[2331]: E1027 07:55:23.523248 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:23.526319 kubelet[2331]: E1027 07:55:23.526282 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:24.127352 systemd[1]: Reload requested from client PID 2623 ('systemctl') (unit session-7.scope)... Oct 27 07:55:24.127368 systemd[1]: Reloading... Oct 27 07:55:24.200368 zram_generator::config[2670]: No configuration found. Oct 27 07:55:24.367048 systemd[1]: Reloading finished in 239 ms. Oct 27 07:55:24.395949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:24.407188 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 07:55:24.407465 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:24.407530 systemd[1]: kubelet.service: Consumed 1.145s CPU time, 123.4M memory peak. Oct 27 07:55:24.409112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 07:55:24.537076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 07:55:24.547631 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 07:55:24.595859 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 07:55:24.595859 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 07:55:24.595859 kubelet[2709]: I1027 07:55:24.595542 2709 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 07:55:24.602915 kubelet[2709]: I1027 07:55:24.602863 2709 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 07:55:24.602915 kubelet[2709]: I1027 07:55:24.602903 2709 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 07:55:24.603031 kubelet[2709]: I1027 07:55:24.602932 2709 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 07:55:24.603031 kubelet[2709]: I1027 07:55:24.602938 2709 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 27 07:55:24.603265 kubelet[2709]: I1027 07:55:24.603230 2709 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 07:55:24.604840 kubelet[2709]: I1027 07:55:24.604819 2709 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 07:55:24.609252 kubelet[2709]: I1027 07:55:24.609002 2709 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 07:55:24.614728 kubelet[2709]: I1027 07:55:24.614708 2709 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 07:55:24.620163 kubelet[2709]: I1027 07:55:24.620139 2709 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 27 07:55:24.620350 kubelet[2709]: I1027 07:55:24.620312 2709 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 07:55:24.620569 kubelet[2709]: I1027 07:55:24.620415 2709 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 07:55:24.620662 kubelet[2709]: I1027 07:55:24.620572 2709 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 07:55:24.620662 kubelet[2709]: I1027 07:55:24.620581 2709 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 07:55:24.620662 kubelet[2709]: I1027 07:55:24.620606 2709 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 07:55:24.621719 kubelet[2709]: I1027 07:55:24.621700 2709 state_mem.go:36] "Initialized new in-memory state store" Oct 27 07:55:24.621871 kubelet[2709]: I1027 07:55:24.621861 2709 kubelet.go:475] "Attempting to sync node with API server" Oct 27 07:55:24.621905 kubelet[2709]: I1027 07:55:24.621877 2709 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 07:55:24.621905 kubelet[2709]: I1027 07:55:24.621901 2709 kubelet.go:387] "Adding apiserver pod source" Oct 
27 07:55:24.621957 kubelet[2709]: I1027 07:55:24.621913 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 07:55:24.624575 kubelet[2709]: I1027 07:55:24.624534 2709 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 07:55:24.625485 kubelet[2709]: I1027 07:55:24.625439 2709 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 07:55:24.625568 kubelet[2709]: I1027 07:55:24.625474 2709 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 07:55:24.628097 kubelet[2709]: I1027 07:55:24.627815 2709 server.go:1262] "Started kubelet" Oct 27 07:55:24.629061 kubelet[2709]: I1027 07:55:24.629037 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 07:55:24.635309 kubelet[2709]: I1027 07:55:24.635175 2709 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 07:55:24.636883 kubelet[2709]: I1027 07:55:24.636846 2709 server.go:310] "Adding debug handlers to kubelet server" Oct 27 07:55:24.641641 kubelet[2709]: I1027 07:55:24.641589 2709 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 07:55:24.641709 kubelet[2709]: I1027 07:55:24.641658 2709 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 07:55:24.642232 kubelet[2709]: I1027 07:55:24.642206 2709 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 07:55:24.642301 kubelet[2709]: I1027 07:55:24.642287 2709 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 07:55:24.642464 kubelet[2709]: I1027 07:55:24.642449 2709 reconciler.go:29] "Reconciler: start to sync state" Oct 27 07:55:24.643661 kubelet[2709]: I1027 07:55:24.643620 2709 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 07:55:24.643899 kubelet[2709]: I1027 07:55:24.643874 2709 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 07:55:24.644117 kubelet[2709]: I1027 07:55:24.644101 2709 factory.go:223] Registration of the systemd container factory successfully Oct 27 07:55:24.644286 kubelet[2709]: I1027 07:55:24.644266 2709 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 07:55:24.647103 kubelet[2709]: E1027 07:55:24.647010 2709 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 07:55:24.647950 kubelet[2709]: I1027 07:55:24.647931 2709 factory.go:223] Registration of the containerd container factory successfully Oct 27 07:55:24.662244 kubelet[2709]: I1027 07:55:24.662213 2709 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 07:55:24.664482 kubelet[2709]: I1027 07:55:24.664298 2709 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 27 07:55:24.664541 kubelet[2709]: I1027 07:55:24.664490 2709 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 07:55:24.664541 kubelet[2709]: I1027 07:55:24.664518 2709 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 07:55:24.664598 kubelet[2709]: E1027 07:55:24.664571 2709 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 07:55:24.682110 kubelet[2709]: I1027 07:55:24.682085 2709 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 07:55:24.682110 kubelet[2709]: I1027 07:55:24.682104 2709 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 07:55:24.682238 kubelet[2709]: I1027 07:55:24.682125 2709 state_mem.go:36] "Initialized new in-memory state store" Oct 27 07:55:24.682265 kubelet[2709]: I1027 07:55:24.682245 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 07:55:24.682287 kubelet[2709]: I1027 07:55:24.682254 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 07:55:24.682287 kubelet[2709]: I1027 07:55:24.682277 2709 policy_none.go:49] "None policy: Start" Oct 27 07:55:24.682287 kubelet[2709]: I1027 07:55:24.682285 2709 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 07:55:24.682465 kubelet[2709]: I1027 07:55:24.682294 2709 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 07:55:24.683209 kubelet[2709]: I1027 07:55:24.683185 2709 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 27 07:55:24.683209 kubelet[2709]: I1027 07:55:24.683212 2709 policy_none.go:47] "Start" Oct 27 07:55:24.687873 kubelet[2709]: E1027 07:55:24.687736 2709 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 07:55:24.688030 kubelet[2709]: I1027 07:55:24.688014 2709 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 07:55:24.688128 kubelet[2709]: I1027 07:55:24.688095 2709 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 07:55:24.688605 kubelet[2709]: I1027 07:55:24.688492 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 07:55:24.689700 kubelet[2709]: E1027 07:55:24.689667 2709 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 07:55:24.765989 kubelet[2709]: I1027 07:55:24.765954 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:24.766370 kubelet[2709]: I1027 07:55:24.766242 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.766431 kubelet[2709]: I1027 07:55:24.766105 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:24.772761 kubelet[2709]: E1027 07:55:24.772704 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:24.773149 kubelet[2709]: E1027 07:55:24.773112 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:24.795605 kubelet[2709]: I1027 07:55:24.795429 2709 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 07:55:24.802665 kubelet[2709]: I1027 07:55:24.802637 2709 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 07:55:24.802774 kubelet[2709]: I1027 07:55:24.802729 2709 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 07:55:24.843817 kubelet[2709]: I1027 07:55:24.843768 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:24.843817 kubelet[2709]: I1027 07:55:24.843809 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.843956 kubelet[2709]: I1027 07:55:24.843831 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.843956 kubelet[2709]: I1027 07:55:24.843846 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:24.843956 kubelet[2709]: I1027 07:55:24.843863 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.843956 kubelet[2709]: I1027 07:55:24.843877 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.843956 kubelet[2709]: I1027 07:55:24.843894 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 07:55:24.844060 kubelet[2709]: I1027 07:55:24.843915 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:24.844060 kubelet[2709]: I1027 07:55:24.843929 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b657af1b14a64d8050659533a8f6a625-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b657af1b14a64d8050659533a8f6a625\") " pod="kube-system/kube-apiserver-localhost" Oct 27 07:55:25.073461 kubelet[2709]: E1027 07:55:25.073196 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.073461 kubelet[2709]: E1027 07:55:25.073221 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.073461 kubelet[2709]: E1027 07:55:25.073389 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.623352 kubelet[2709]: I1027 07:55:25.623284 2709 apiserver.go:52] "Watching apiserver" Oct 27 07:55:25.642923 kubelet[2709]: I1027 07:55:25.642881 2709 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 07:55:25.676498 kubelet[2709]: E1027 07:55:25.676471 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.676671 kubelet[2709]: E1027 07:55:25.676518 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.678361 kubelet[2709]: I1027 07:55:25.678321 2709 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:25.683030 kubelet[2709]: E1027 07:55:25.682972 2709 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 07:55:25.683205 kubelet[2709]: E1027 07:55:25.683169 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:25.713174 kubelet[2709]: I1027 
07:55:25.713112 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.713095403 podStartE2EDuration="2.713095403s" podCreationTimestamp="2025-10-27 07:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:55:25.71264455 +0000 UTC m=+1.160764699" watchObservedRunningTime="2025-10-27 07:55:25.713095403 +0000 UTC m=+1.161215472" Oct 27 07:55:25.713565 kubelet[2709]: I1027 07:55:25.713454 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.713444729 podStartE2EDuration="2.713444729s" podCreationTimestamp="2025-10-27 07:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:55:25.70291049 +0000 UTC m=+1.151030599" watchObservedRunningTime="2025-10-27 07:55:25.713444729 +0000 UTC m=+1.161564798" Oct 27 07:55:25.725352 kubelet[2709]: I1027 07:55:25.724282 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.724268114 podStartE2EDuration="1.724268114s" podCreationTimestamp="2025-10-27 07:55:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:55:25.723567183 +0000 UTC m=+1.171687252" watchObservedRunningTime="2025-10-27 07:55:25.724268114 +0000 UTC m=+1.172388223" Oct 27 07:55:26.677689 kubelet[2709]: E1027 07:55:26.677640 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:26.678008 kubelet[2709]: E1027 07:55:26.677755 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:29.634831 kubelet[2709]: I1027 07:55:29.634773 2709 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 07:55:29.635299 containerd[1582]: time="2025-10-27T07:55:29.635260944Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 07:55:29.635575 kubelet[2709]: I1027 07:55:29.635551 2709 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 07:55:29.997253 kubelet[2709]: E1027 07:55:29.997146 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:30.438380 systemd[1]: Created slice kubepods-besteffort-pod1dff2497_6f15_4498_b9a7_00b294334942.slice - libcontainer container kubepods-besteffort-pod1dff2497_6f15_4498_b9a7_00b294334942.slice. 
Oct 27 07:55:30.480131 kubelet[2709]: I1027 07:55:30.480027 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dff2497-6f15-4498-b9a7-00b294334942-lib-modules\") pod \"kube-proxy-4qq62\" (UID: \"1dff2497-6f15-4498-b9a7-00b294334942\") " pod="kube-system/kube-proxy-4qq62" Oct 27 07:55:30.480131 kubelet[2709]: I1027 07:55:30.480088 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8t99\" (UniqueName: \"kubernetes.io/projected/1dff2497-6f15-4498-b9a7-00b294334942-kube-api-access-v8t99\") pod \"kube-proxy-4qq62\" (UID: \"1dff2497-6f15-4498-b9a7-00b294334942\") " pod="kube-system/kube-proxy-4qq62" Oct 27 07:55:30.480131 kubelet[2709]: I1027 07:55:30.480120 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dff2497-6f15-4498-b9a7-00b294334942-kube-proxy\") pod \"kube-proxy-4qq62\" (UID: \"1dff2497-6f15-4498-b9a7-00b294334942\") " pod="kube-system/kube-proxy-4qq62" Oct 27 07:55:30.480131 kubelet[2709]: I1027 07:55:30.480135 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dff2497-6f15-4498-b9a7-00b294334942-xtables-lock\") pod \"kube-proxy-4qq62\" (UID: \"1dff2497-6f15-4498-b9a7-00b294334942\") " pod="kube-system/kube-proxy-4qq62" Oct 27 07:55:30.752526 kubelet[2709]: E1027 07:55:30.752436 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:30.753342 containerd[1582]: time="2025-10-27T07:55:30.753257561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qq62,Uid:1dff2497-6f15-4498-b9a7-00b294334942,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:30.769843 containerd[1582]: time="2025-10-27T07:55:30.769796424Z" level=info msg="connecting to shim 749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40" address="unix:///run/containerd/s/efadf9ced921dab982435b8ffda7718f91b1e4b62b727aff6a4f4bef3337bdfd" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:30.798509 systemd[1]: Started cri-containerd-749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40.scope - libcontainer container 749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40. Oct 27 07:55:30.838249 containerd[1582]: time="2025-10-27T07:55:30.838191509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qq62,Uid:1dff2497-6f15-4498-b9a7-00b294334942,Namespace:kube-system,Attempt:0,} returns sandbox id \"749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40\"" Oct 27 07:55:30.840954 kubelet[2709]: E1027 07:55:30.840926 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:30.841689 systemd[1]: Created slice kubepods-besteffort-pod1753cccf_a082_4bdf_83be_b93600538644.slice - libcontainer container kubepods-besteffort-pod1753cccf_a082_4bdf_83be_b93600538644.slice. 
Oct 27 07:55:30.850526 containerd[1582]: time="2025-10-27T07:55:30.850488821Z" level=info msg="CreateContainer within sandbox \"749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 07:55:30.860916 containerd[1582]: time="2025-10-27T07:55:30.860867729Z" level=info msg="Container 2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:30.868180 containerd[1582]: time="2025-10-27T07:55:30.868144680Z" level=info msg="CreateContainer within sandbox \"749f2163bdee9f7f3a6ac8eb3afa187df56e8cb2e11ac6a9eab86eabe1b81e40\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e\"" Oct 27 07:55:30.868828 containerd[1582]: time="2025-10-27T07:55:30.868760615Z" level=info msg="StartContainer for \"2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e\"" Oct 27 07:55:30.870611 containerd[1582]: time="2025-10-27T07:55:30.870580023Z" level=info msg="connecting to shim 2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e" address="unix:///run/containerd/s/efadf9ced921dab982435b8ffda7718f91b1e4b62b727aff6a4f4bef3337bdfd" protocol=ttrpc version=3 Oct 27 07:55:30.882457 kubelet[2709]: I1027 07:55:30.882422 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1753cccf-a082-4bdf-83be-b93600538644-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-8n42k\" (UID: \"1753cccf-a082-4bdf-83be-b93600538644\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8n42k" Oct 27 07:55:30.883103 kubelet[2709]: I1027 07:55:30.883077 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jt2d\" (UniqueName: \"kubernetes.io/projected/1753cccf-a082-4bdf-83be-b93600538644-kube-api-access-7jt2d\") pod \"tigera-operator-65cdcdfd6d-8n42k\" (UID: \"1753cccf-a082-4bdf-83be-b93600538644\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-8n42k" Oct 27 07:55:30.893519 systemd[1]: Started cri-containerd-2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e.scope - libcontainer container 2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e. Oct 27 07:55:30.931648 containerd[1582]: time="2025-10-27T07:55:30.931546763Z" level=info msg="StartContainer for \"2c789c078c09786a4cfc4488808b01804683170276592a6663c19fa835db272e\" returns successfully" Oct 27 07:55:31.149014 containerd[1582]: time="2025-10-27T07:55:31.148964977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8n42k,Uid:1753cccf-a082-4bdf-83be-b93600538644,Namespace:tigera-operator,Attempt:0,}" Oct 27 07:55:31.173290 containerd[1582]: time="2025-10-27T07:55:31.173237747Z" level=info msg="connecting to shim 23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4" address="unix:///run/containerd/s/e92ad9163400342ce6f397e28ee7ab7c07fb5d4742f0fa894848e97ae9644662" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:31.198523 systemd[1]: Started cri-containerd-23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4.scope - libcontainer container 23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4. 
Oct 27 07:55:31.227901 containerd[1582]: time="2025-10-27T07:55:31.227804941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-8n42k,Uid:1753cccf-a082-4bdf-83be-b93600538644,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4\"" Oct 27 07:55:31.229967 containerd[1582]: time="2025-10-27T07:55:31.229873824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 07:55:31.686628 kubelet[2709]: E1027 07:55:31.686587 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:32.811429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175142844.mount: Deactivated successfully. Oct 27 07:55:33.162928 containerd[1582]: time="2025-10-27T07:55:33.162866105Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:33.163408 containerd[1582]: time="2025-10-27T07:55:33.163377008Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 27 07:55:33.164190 containerd[1582]: time="2025-10-27T07:55:33.164163622Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:33.166292 containerd[1582]: time="2025-10-27T07:55:33.166259832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:33.166932 containerd[1582]: time="2025-10-27T07:55:33.166897850Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.936964029s" Oct 27 07:55:33.166932 containerd[1582]: time="2025-10-27T07:55:33.166931249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 27 07:55:33.172093 containerd[1582]: time="2025-10-27T07:55:33.172026279Z" level=info msg="CreateContainer within sandbox \"23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 07:55:33.178463 containerd[1582]: time="2025-10-27T07:55:33.177908842Z" level=info msg="Container ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:33.186463 containerd[1582]: time="2025-10-27T07:55:33.186431716Z" level=info msg="CreateContainer within sandbox \"23cfb2f88015b1198abb3177bbf49117832ff6ff4dab5ade36344ae075ac5cb4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5\"" Oct 27 07:55:33.187941 containerd[1582]: time="2025-10-27T07:55:33.186947419Z" level=info msg="StartContainer for \"ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5\"" Oct 27 07:55:33.187941 containerd[1582]: time="2025-10-27T07:55:33.187682234Z" level=info 
msg="connecting to shim ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5" address="unix:///run/containerd/s/e92ad9163400342ce6f397e28ee7ab7c07fb5d4742f0fa894848e97ae9644662" protocol=ttrpc version=3 Oct 27 07:55:33.226717 systemd[1]: Started cri-containerd-ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5.scope - libcontainer container ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5. Oct 27 07:55:33.250080 containerd[1582]: time="2025-10-27T07:55:33.250026067Z" level=info msg="StartContainer for \"ceeb57d0df3e6d3adedca43b82322f669db2ea136c9521b91cffabddf3404cd5\" returns successfully" Oct 27 07:55:33.702258 kubelet[2709]: I1027 07:55:33.702155 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4qq62" podStartSLOduration=3.702139167 podStartE2EDuration="3.702139167s" podCreationTimestamp="2025-10-27 07:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:55:31.709976663 +0000 UTC m=+7.158096732" watchObservedRunningTime="2025-10-27 07:55:33.702139167 +0000 UTC m=+9.150259236" Oct 27 07:55:33.703098 kubelet[2709]: I1027 07:55:33.702751 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-8n42k" podStartSLOduration=1.7643251690000001 podStartE2EDuration="3.702738787s" podCreationTimestamp="2025-10-27 07:55:30 +0000 UTC" firstStartedPulling="2025-10-27 07:55:31.229299925 +0000 UTC m=+6.677419994" lastFinishedPulling="2025-10-27 07:55:33.167713543 +0000 UTC m=+8.615833612" observedRunningTime="2025-10-27 07:55:33.702564793 +0000 UTC m=+9.150684862" watchObservedRunningTime="2025-10-27 07:55:33.702738787 +0000 UTC m=+9.150859016" Oct 27 07:55:34.594734 kubelet[2709]: E1027 07:55:34.594692 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:34.698796 kubelet[2709]: E1027 07:55:34.698756 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:34.893528 kubelet[2709]: E1027 07:55:34.893132 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:35.705440 kubelet[2709]: E1027 07:55:35.705399 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:35.705968 kubelet[2709]: E1027 07:55:35.705945 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:38.503469 sudo[1778]: pam_unix(sudo:session): session closed for user root Oct 27 07:55:38.505651 sshd[1777]: Connection closed by 10.0.0.1 port 46716 Oct 27 07:55:38.508659 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Oct 27 07:55:38.515129 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Oct 27 07:55:38.515684 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:46716.service: Deactivated successfully. Oct 27 07:55:38.519657 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 27 07:55:38.519972 systemd[1]: session-7.scope: Consumed 6.428s CPU time, 215.8M memory peak. Oct 27 07:55:38.521771 systemd-logind[1548]: Removed session 7. Oct 27 07:55:40.007776 kubelet[2709]: E1027 07:55:40.007458 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:40.710393 kubelet[2709]: E1027 07:55:40.710249 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:42.848432 update_engine[1550]: I20251027 07:55:42.848361 1550 update_attempter.cc:509] Updating boot flags... Oct 27 07:55:46.571561 systemd[1]: Created slice kubepods-besteffort-pod40c3a678_59cc_4d33_b9d9_4892c7e11b78.slice - libcontainer container kubepods-besteffort-pod40c3a678_59cc_4d33_b9d9_4892c7e11b78.slice. Oct 27 07:55:46.594800 kubelet[2709]: I1027 07:55:46.594741 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40c3a678-59cc-4d33-b9d9-4892c7e11b78-tigera-ca-bundle\") pod \"calico-typha-799fd7c6c-czjhf\" (UID: \"40c3a678-59cc-4d33-b9d9-4892c7e11b78\") " pod="calico-system/calico-typha-799fd7c6c-czjhf" Oct 27 07:55:46.594800 kubelet[2709]: I1027 07:55:46.594799 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/40c3a678-59cc-4d33-b9d9-4892c7e11b78-typha-certs\") pod \"calico-typha-799fd7c6c-czjhf\" (UID: \"40c3a678-59cc-4d33-b9d9-4892c7e11b78\") " pod="calico-system/calico-typha-799fd7c6c-czjhf" Oct 27 07:55:46.595144 kubelet[2709]: I1027 07:55:46.594821 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwhq\" (UniqueName: \"kubernetes.io/projected/40c3a678-59cc-4d33-b9d9-4892c7e11b78-kube-api-access-5qwhq\") pod \"calico-typha-799fd7c6c-czjhf\" (UID: \"40c3a678-59cc-4d33-b9d9-4892c7e11b78\") " pod="calico-system/calico-typha-799fd7c6c-czjhf" Oct 27 07:55:46.878372 kubelet[2709]: E1027 07:55:46.878261 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:46.878856 containerd[1582]: time="2025-10-27T07:55:46.878823039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799fd7c6c-czjhf,Uid:40c3a678-59cc-4d33-b9d9-4892c7e11b78,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:46.912911 containerd[1582]: time="2025-10-27T07:55:46.912825582Z" level=info msg="connecting to shim 0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59" address="unix:///run/containerd/s/eeb69038aea173d72878fd387af6dfd27f2b3c0b0943bc7acae1763e5442fa1a" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:46.950531 systemd[1]: Started cri-containerd-0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59.scope - libcontainer container 0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59. Oct 27 07:55:46.962578 systemd[1]: Created slice kubepods-besteffort-podaf6f96d5_3f8e_447e_b461_fa293555a420.slice - libcontainer container kubepods-besteffort-podaf6f96d5_3f8e_447e_b461_fa293555a420.slice. 
Oct 27 07:55:46.996560 kubelet[2709]: I1027 07:55:46.996519 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-lib-modules\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996560 kubelet[2709]: I1027 07:55:46.996554 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6f96d5-3f8e-447e-b461-fa293555a420-tigera-ca-bundle\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996560 kubelet[2709]: I1027 07:55:46.996570 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af6f96d5-3f8e-447e-b461-fa293555a420-node-certs\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996828 kubelet[2709]: I1027 07:55:46.996584 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-xtables-lock\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996828 kubelet[2709]: I1027 07:55:46.996602 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-flexvol-driver-host\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996828 kubelet[2709]: I1027 07:55:46.996620 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-cni-bin-dir\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.996828 kubelet[2709]: I1027 07:55:46.996632 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-policysync\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.997542 kubelet[2709]: I1027 07:55:46.997508 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-cni-log-dir\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.997618 kubelet[2709]: I1027 07:55:46.997601 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-cni-net-dir\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.997646 kubelet[2709]: I1027 07:55:46.997628 2709 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-var-run-calico\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.997684 kubelet[2709]: I1027 07:55:46.997650 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af6f96d5-3f8e-447e-b461-fa293555a420-var-lib-calico\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:46.997713 kubelet[2709]: I1027 07:55:46.997691 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2ln\" (UniqueName: \"kubernetes.io/projected/af6f96d5-3f8e-447e-b461-fa293555a420-kube-api-access-7k2ln\") pod \"calico-node-brbsr\" (UID: \"af6f96d5-3f8e-447e-b461-fa293555a420\") " pod="calico-system/calico-node-brbsr" Oct 27 07:55:47.004971 containerd[1582]: time="2025-10-27T07:55:47.004936582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-799fd7c6c-czjhf,Uid:40c3a678-59cc-4d33-b9d9-4892c7e11b78,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59\"" Oct 27 07:55:47.005651 kubelet[2709]: E1027 07:55:47.005628 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:47.006693 containerd[1582]: time="2025-10-27T07:55:47.006649394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 07:55:47.102917 kubelet[2709]: E1027 07:55:47.102876 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.102917 kubelet[2709]: W1027 07:55:47.102900 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.103059 kubelet[2709]: E1027 07:55:47.102931 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.113830 kubelet[2709]: E1027 07:55:47.113792 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.113830 kubelet[2709]: W1027 07:55:47.113814 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.113830 kubelet[2709]: E1027 07:55:47.113830 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.150500 kubelet[2709]: E1027 07:55:47.150392 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:55:47.182509 kubelet[2709]: E1027 07:55:47.182473 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.182509 kubelet[2709]: W1027 07:55:47.182501 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.182655 kubelet[2709]: E1027 07:55:47.182523 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.184563 kubelet[2709]: E1027 07:55:47.184351 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.184563 kubelet[2709]: W1027 07:55:47.184377 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.184563 kubelet[2709]: E1027 07:55:47.184424 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.184786 kubelet[2709]: E1027 07:55:47.184760 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.184786 kubelet[2709]: W1027 07:55:47.184775 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.184786 kubelet[2709]: E1027 07:55:47.184787 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.185446 kubelet[2709]: E1027 07:55:47.185352 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.185446 kubelet[2709]: W1027 07:55:47.185376 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.185446 kubelet[2709]: E1027 07:55:47.185390 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.185871 kubelet[2709]: E1027 07:55:47.185809 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.186019 kubelet[2709]: W1027 07:55:47.185889 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.186019 kubelet[2709]: E1027 07:55:47.185904 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.186349 kubelet[2709]: E1027 07:55:47.186278 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.186349 kubelet[2709]: W1027 07:55:47.186292 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.186349 kubelet[2709]: E1027 07:55:47.186303 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.187024 kubelet[2709]: E1027 07:55:47.186988 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.187024 kubelet[2709]: W1027 07:55:47.187005 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.187024 kubelet[2709]: E1027 07:55:47.187017 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.187192 kubelet[2709]: E1027 07:55:47.187172 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.187192 kubelet[2709]: W1027 07:55:47.187183 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.187192 kubelet[2709]: E1027 07:55:47.187191 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.187376 kubelet[2709]: E1027 07:55:47.187350 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.187376 kubelet[2709]: W1027 07:55:47.187363 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.187670 kubelet[2709]: E1027 07:55:47.187383 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.188348 kubelet[2709]: E1027 07:55:47.188086 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.188348 kubelet[2709]: W1027 07:55:47.188104 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.188348 kubelet[2709]: E1027 07:55:47.188116 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.188348 kubelet[2709]: E1027 07:55:47.188276 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.188348 kubelet[2709]: W1027 07:55:47.188286 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.188348 kubelet[2709]: E1027 07:55:47.188295 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.190111 kubelet[2709]: E1027 07:55:47.189250 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.190111 kubelet[2709]: W1027 07:55:47.189269 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.190111 kubelet[2709]: E1027 07:55:47.189283 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.191240 kubelet[2709]: E1027 07:55:47.190740 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.191240 kubelet[2709]: W1027 07:55:47.190758 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.191240 kubelet[2709]: E1027 07:55:47.190780 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.191240 kubelet[2709]: E1027 07:55:47.190976 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.191240 kubelet[2709]: W1027 07:55:47.190986 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.191240 kubelet[2709]: E1027 07:55:47.191002 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.192877 kubelet[2709]: E1027 07:55:47.192447 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.192877 kubelet[2709]: W1027 07:55:47.192467 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.192877 kubelet[2709]: E1027 07:55:47.192497 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.192877 kubelet[2709]: E1027 07:55:47.192679 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.192877 kubelet[2709]: W1027 07:55:47.192692 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.192877 kubelet[2709]: E1027 07:55:47.192701 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.193158 kubelet[2709]: E1027 07:55:47.193127 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.193158 kubelet[2709]: W1027 07:55:47.193146 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.193158 kubelet[2709]: E1027 07:55:47.193159 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.194512 kubelet[2709]: E1027 07:55:47.194488 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.194512 kubelet[2709]: W1027 07:55:47.194510 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.194607 kubelet[2709]: E1027 07:55:47.194533 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.195010 kubelet[2709]: E1027 07:55:47.194983 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.195010 kubelet[2709]: W1027 07:55:47.195006 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.195101 kubelet[2709]: E1027 07:55:47.195027 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.195453 kubelet[2709]: E1027 07:55:47.195427 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.195453 kubelet[2709]: W1027 07:55:47.195445 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.195546 kubelet[2709]: E1027 07:55:47.195466 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.199817 kubelet[2709]: E1027 07:55:47.199784 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.199817 kubelet[2709]: W1027 07:55:47.199803 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.199817 kubelet[2709]: E1027 07:55:47.199818 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.199925 kubelet[2709]: I1027 07:55:47.199843 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cea2b2ed-e433-4f33-b71d-afa53cd98b5f-varrun\") pod \"csi-node-driver-7vvvz\" (UID: \"cea2b2ed-e433-4f33-b71d-afa53cd98b5f\") " pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:47.200188 kubelet[2709]: E1027 07:55:47.200019 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.200188 kubelet[2709]: W1027 07:55:47.200031 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.200188 kubelet[2709]: E1027 07:55:47.200041 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.200188 kubelet[2709]: I1027 07:55:47.200063 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea2b2ed-e433-4f33-b71d-afa53cd98b5f-kubelet-dir\") pod \"csi-node-driver-7vvvz\" (UID: \"cea2b2ed-e433-4f33-b71d-afa53cd98b5f\") " pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:47.200299 kubelet[2709]: E1027 07:55:47.200231 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.200299 kubelet[2709]: W1027 07:55:47.200240 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.200299 kubelet[2709]: E1027 07:55:47.200248 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.200299 kubelet[2709]: I1027 07:55:47.200268 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cea2b2ed-e433-4f33-b71d-afa53cd98b5f-socket-dir\") pod \"csi-node-driver-7vvvz\" (UID: \"cea2b2ed-e433-4f33-b71d-afa53cd98b5f\") " pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:47.200751 kubelet[2709]: E1027 07:55:47.200425 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.200751 kubelet[2709]: W1027 07:55:47.200439 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.200751 kubelet[2709]: E1027 07:55:47.200449 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.200751 kubelet[2709]: I1027 07:55:47.200468 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkg2b\" (UniqueName: \"kubernetes.io/projected/cea2b2ed-e433-4f33-b71d-afa53cd98b5f-kube-api-access-rkg2b\") pod \"csi-node-driver-7vvvz\" (UID: \"cea2b2ed-e433-4f33-b71d-afa53cd98b5f\") " pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:47.200751 kubelet[2709]: E1027 07:55:47.200661 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.200751 kubelet[2709]: W1027 07:55:47.200677 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.200751 kubelet[2709]: E1027 07:55:47.200690 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.200926 kubelet[2709]: E1027 07:55:47.200843 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.200926 kubelet[2709]: W1027 07:55:47.200851 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.200926 kubelet[2709]: E1027 07:55:47.200859 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201006 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.201364 kubelet[2709]: W1027 07:55:47.201017 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201025 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201136 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.201364 kubelet[2709]: W1027 07:55:47.201143 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201150 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201280 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.201364 kubelet[2709]: W1027 07:55:47.201287 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.201364 kubelet[2709]: E1027 07:55:47.201294 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.201593 kubelet[2709]: E1027 07:55:47.201442 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.201593 kubelet[2709]: W1027 07:55:47.201450 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.201593 kubelet[2709]: E1027 07:55:47.201458 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.201593 kubelet[2709]: E1027 07:55:47.201588 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.201593 kubelet[2709]: W1027 07:55:47.201595 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.201686 kubelet[2709]: E1027 07:55:47.201603 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.201686 kubelet[2709]: I1027 07:55:47.201626 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cea2b2ed-e433-4f33-b71d-afa53cd98b5f-registration-dir\") pod \"csi-node-driver-7vvvz\" (UID: \"cea2b2ed-e433-4f33-b71d-afa53cd98b5f\") " pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.201777 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.202359 kubelet[2709]: W1027 07:55:47.201788 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.201796 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.201929 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.202359 kubelet[2709]: W1027 07:55:47.201936 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.201943 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.202059 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.202359 kubelet[2709]: W1027 07:55:47.202066 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.202073 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.202359 kubelet[2709]: E1027 07:55:47.202193 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.202601 kubelet[2709]: W1027 07:55:47.202200 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.202601 kubelet[2709]: E1027 07:55:47.202207 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.270276 kubelet[2709]: E1027 07:55:47.270246 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:47.270814 containerd[1582]: time="2025-10-27T07:55:47.270778598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brbsr,Uid:af6f96d5-3f8e-447e-b461-fa293555a420,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:47.291551 containerd[1582]: time="2025-10-27T07:55:47.291470103Z" level=info msg="connecting to shim 91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0" address="unix:///run/containerd/s/4075a889ed56869df013a4b0e688533d941a5856e32407f75f474073cee38199" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:55:47.302907 kubelet[2709]: E1027 07:55:47.302880 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.302907 kubelet[2709]: W1027 07:55:47.302902 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.303011 kubelet[2709]: E1027 07:55:47.302923 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.303187 kubelet[2709]: E1027 07:55:47.303170 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.303216 kubelet[2709]: W1027 07:55:47.303185 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.303216 kubelet[2709]: E1027 07:55:47.303196 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.303559 kubelet[2709]: E1027 07:55:47.303545 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.303599 kubelet[2709]: W1027 07:55:47.303559 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.303599 kubelet[2709]: E1027 07:55:47.303572 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.303851 kubelet[2709]: E1027 07:55:47.303837 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.303851 kubelet[2709]: W1027 07:55:47.303850 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.303920 kubelet[2709]: E1027 07:55:47.303860 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.304024 kubelet[2709]: E1027 07:55:47.304013 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.304024 kubelet[2709]: W1027 07:55:47.304023 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.304073 kubelet[2709]: E1027 07:55:47.304032 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.304224 kubelet[2709]: E1027 07:55:47.304213 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.304257 kubelet[2709]: W1027 07:55:47.304226 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.304257 kubelet[2709]: E1027 07:55:47.304234 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.304472 kubelet[2709]: E1027 07:55:47.304457 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.304472 kubelet[2709]: W1027 07:55:47.304471 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.304537 kubelet[2709]: E1027 07:55:47.304482 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.304667 kubelet[2709]: E1027 07:55:47.304654 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.304667 kubelet[2709]: W1027 07:55:47.304668 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.304727 kubelet[2709]: E1027 07:55:47.304677 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.304847 kubelet[2709]: E1027 07:55:47.304834 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.304847 kubelet[2709]: W1027 07:55:47.304845 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.304926 kubelet[2709]: E1027 07:55:47.304854 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.305003 kubelet[2709]: E1027 07:55:47.304991 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.305003 kubelet[2709]: W1027 07:55:47.305001 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.305049 kubelet[2709]: E1027 07:55:47.305010 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.305168 kubelet[2709]: E1027 07:55:47.305156 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.305168 kubelet[2709]: W1027 07:55:47.305166 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.305212 kubelet[2709]: E1027 07:55:47.305174 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.305328 kubelet[2709]: E1027 07:55:47.305316 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.305328 kubelet[2709]: W1027 07:55:47.305327 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.305405 kubelet[2709]: E1027 07:55:47.305353 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.305571 kubelet[2709]: E1027 07:55:47.305558 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.305571 kubelet[2709]: W1027 07:55:47.305569 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.305619 kubelet[2709]: E1027 07:55:47.305578 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.305767 kubelet[2709]: E1027 07:55:47.305752 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.305794 kubelet[2709]: W1027 07:55:47.305767 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.305794 kubelet[2709]: E1027 07:55:47.305777 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.306557 kubelet[2709]: E1027 07:55:47.306539 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.306557 kubelet[2709]: W1027 07:55:47.306555 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.306638 kubelet[2709]: E1027 07:55:47.306568 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.306766 kubelet[2709]: E1027 07:55:47.306750 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.306766 kubelet[2709]: W1027 07:55:47.306765 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.306841 kubelet[2709]: E1027 07:55:47.306776 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.306970 kubelet[2709]: E1027 07:55:47.306956 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.306998 kubelet[2709]: W1027 07:55:47.306970 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.306998 kubelet[2709]: E1027 07:55:47.306981 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.307414 kubelet[2709]: E1027 07:55:47.307291 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.307414 kubelet[2709]: W1027 07:55:47.307408 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.307509 kubelet[2709]: E1027 07:55:47.307421 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.307733 kubelet[2709]: E1027 07:55:47.307655 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.307733 kubelet[2709]: W1027 07:55:47.307668 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.307733 kubelet[2709]: E1027 07:55:47.307730 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.308059 kubelet[2709]: E1027 07:55:47.308042 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.308059 kubelet[2709]: W1027 07:55:47.308057 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.308115 kubelet[2709]: E1027 07:55:47.308068 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.308595 kubelet[2709]: E1027 07:55:47.308577 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.308595 kubelet[2709]: W1027 07:55:47.308592 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.308675 kubelet[2709]: E1027 07:55:47.308604 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.308870 kubelet[2709]: E1027 07:55:47.308855 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.308870 kubelet[2709]: W1027 07:55:47.308869 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.308927 kubelet[2709]: E1027 07:55:47.308879 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.309075 kubelet[2709]: E1027 07:55:47.309063 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.309108 kubelet[2709]: W1027 07:55:47.309076 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.309108 kubelet[2709]: E1027 07:55:47.309086 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.309275 kubelet[2709]: E1027 07:55:47.309263 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.309275 kubelet[2709]: W1027 07:55:47.309275 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.309331 kubelet[2709]: E1027 07:55:47.309284 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:47.309682 kubelet[2709]: E1027 07:55:47.309556 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.309682 kubelet[2709]: W1027 07:55:47.309576 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.309682 kubelet[2709]: E1027 07:55:47.309591 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.312621 systemd[1]: Started cri-containerd-91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0.scope - libcontainer container 91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0. Oct 27 07:55:47.320517 kubelet[2709]: E1027 07:55:47.320495 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:47.320517 kubelet[2709]: W1027 07:55:47.320513 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:47.320630 kubelet[2709]: E1027 07:55:47.320530 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:47.362718 containerd[1582]: time="2025-10-27T07:55:47.362656431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-brbsr,Uid:af6f96d5-3f8e-447e-b461-fa293555a420,Namespace:calico-system,Attempt:0,} returns sandbox id \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\"" Oct 27 07:55:47.363822 kubelet[2709]: E1027 07:55:47.363790 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:48.136786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432617808.mount: Deactivated successfully. 
Oct 27 07:55:48.665307 kubelet[2709]: E1027 07:55:48.665199 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:55:48.669998 containerd[1582]: time="2025-10-27T07:55:48.669960478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:48.670474 containerd[1582]: time="2025-10-27T07:55:48.670446351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 27 07:55:48.671366 containerd[1582]: time="2025-10-27T07:55:48.671321657Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:48.673165 containerd[1582]: time="2025-10-27T07:55:48.673126469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:48.673908 containerd[1582]: time="2025-10-27T07:55:48.673867298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.667171984s" Oct 27 07:55:48.673955 containerd[1582]: time="2025-10-27T07:55:48.673909697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 27 07:55:48.675154 containerd[1582]: time="2025-10-27T07:55:48.675117118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 07:55:48.686584 containerd[1582]: time="2025-10-27T07:55:48.686531422Z" level=info msg="CreateContainer within sandbox \"0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 07:55:48.692070 containerd[1582]: time="2025-10-27T07:55:48.692029497Z" level=info msg="Container 3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:48.698857 containerd[1582]: time="2025-10-27T07:55:48.698816392Z" level=info msg="CreateContainer within sandbox \"0e68f854f3f70814d53f40ae68fa7939956f1fbaab1a72d58afde9fda261ae59\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79\"" Oct 27 07:55:48.699877 containerd[1582]: time="2025-10-27T07:55:48.699299145Z" level=info msg="StartContainer for \"3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79\"" Oct 27 07:55:48.700716 containerd[1582]: time="2025-10-27T07:55:48.700666404Z" level=info msg="connecting to shim 3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79" address="unix:///run/containerd/s/eeb69038aea173d72878fd387af6dfd27f2b3c0b0943bc7acae1763e5442fa1a" protocol=ttrpc version=3 Oct 27 07:55:48.721533 systemd[1]: Started 
cri-containerd-3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79.scope - libcontainer container 3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79. Oct 27 07:55:48.759575 containerd[1582]: time="2025-10-27T07:55:48.759536094Z" level=info msg="StartContainer for \"3958d30ca6535b7078319d9a888fa50e6cb21087e1b39084d1aa2bcca3f64f79\" returns successfully" Oct 27 07:55:49.738379 kubelet[2709]: E1027 07:55:49.738318 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:49.751216 kubelet[2709]: I1027 07:55:49.751162 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-799fd7c6c-czjhf" podStartSLOduration=2.082758843 podStartE2EDuration="3.751134048s" podCreationTimestamp="2025-10-27 07:55:46 +0000 UTC" firstStartedPulling="2025-10-27 07:55:47.006360559 +0000 UTC m=+22.454480628" lastFinishedPulling="2025-10-27 07:55:48.674735764 +0000 UTC m=+24.122855833" observedRunningTime="2025-10-27 07:55:49.750089183 +0000 UTC m=+25.198209252" watchObservedRunningTime="2025-10-27 07:55:49.751134048 +0000 UTC m=+25.199254117" Oct 27 07:55:49.778217 containerd[1582]: time="2025-10-27T07:55:49.778177489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:49.778890 containerd[1582]: time="2025-10-27T07:55:49.778727721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 27 07:55:49.779725 containerd[1582]: time="2025-10-27T07:55:49.779685306Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:49.781578 containerd[1582]: time="2025-10-27T07:55:49.781551239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:49.782050 containerd[1582]: time="2025-10-27T07:55:49.782015432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.106865554s" Oct 27 07:55:49.782102 containerd[1582]: time="2025-10-27T07:55:49.782050272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 27 07:55:49.797553 containerd[1582]: time="2025-10-27T07:55:49.797361206Z" level=info msg="CreateContainer within sandbox \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 07:55:49.809536 containerd[1582]: time="2025-10-27T07:55:49.809493746Z" level=info msg="Container ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:49.816250 kubelet[2709]: E1027 07:55:49.816219 2709 driver-call.go:262] Failed to unmarshal output 
for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.816250 kubelet[2709]: W1027 07:55:49.816246 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.816391 kubelet[2709]: E1027 07:55:49.816266 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.816758 kubelet[2709]: E1027 07:55:49.816739 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.816794 kubelet[2709]: W1027 07:55:49.816780 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.816817 kubelet[2709]: E1027 07:55:49.816796 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.816984 kubelet[2709]: E1027 07:55:49.816970 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.816984 kubelet[2709]: W1027 07:55:49.816983 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.817046 kubelet[2709]: E1027 07:55:49.816993 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.817261 kubelet[2709]: E1027 07:55:49.817245 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.817261 kubelet[2709]: W1027 07:55:49.817261 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.817324 kubelet[2709]: E1027 07:55:49.817272 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.817484 kubelet[2709]: E1027 07:55:49.817472 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.817515 kubelet[2709]: W1027 07:55:49.817484 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.817515 kubelet[2709]: E1027 07:55:49.817494 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.818621 kubelet[2709]: E1027 07:55:49.818585 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.818621 kubelet[2709]: W1027 07:55:49.818600 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.818731 kubelet[2709]: E1027 07:55:49.818718 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.818964 kubelet[2709]: E1027 07:55:49.818952 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.819103 kubelet[2709]: W1027 07:55:49.819028 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.819103 kubelet[2709]: E1027 07:55:49.819044 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.819297 containerd[1582]: time="2025-10-27T07:55:49.819262802Z" level=info msg="CreateContainer within sandbox \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\"" Oct 27 07:55:49.819399 kubelet[2709]: E1027 07:55:49.819381 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.819491 kubelet[2709]: W1027 07:55:49.819480 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.819601 kubelet[2709]: E1027 07:55:49.819584 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.819631 containerd[1582]: time="2025-10-27T07:55:49.819611917Z" level=info msg="StartContainer for \"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\"" Oct 27 07:55:49.819891 kubelet[2709]: E1027 07:55:49.819879 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.820017 kubelet[2709]: W1027 07:55:49.819968 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.820017 kubelet[2709]: E1027 07:55:49.819983 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.820217 kubelet[2709]: E1027 07:55:49.820204 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.820342 kubelet[2709]: W1027 07:55:49.820271 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.820342 kubelet[2709]: E1027 07:55:49.820286 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.820664 kubelet[2709]: E1027 07:55:49.820598 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.820664 kubelet[2709]: W1027 07:55:49.820612 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.820664 kubelet[2709]: E1027 07:55:49.820622 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.820882 containerd[1582]: time="2025-10-27T07:55:49.820838939Z" level=info msg="connecting to shim ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae" address="unix:///run/containerd/s/4075a889ed56869df013a4b0e688533d941a5856e32407f75f474073cee38199" protocol=ttrpc version=3 Oct 27 07:55:49.821045 kubelet[2709]: E1027 07:55:49.820976 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.821045 kubelet[2709]: W1027 07:55:49.820988 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.821045 kubelet[2709]: E1027 07:55:49.820999 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.821320 kubelet[2709]: E1027 07:55:49.821299 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.822116 kubelet[2709]: W1027 07:55:49.821381 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.822116 kubelet[2709]: E1027 07:55:49.821395 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.822441 kubelet[2709]: E1027 07:55:49.822403 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.822441 kubelet[2709]: W1027 07:55:49.822418 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.822721 kubelet[2709]: E1027 07:55:49.822702 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.823041 kubelet[2709]: E1027 07:55:49.823024 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.823118 kubelet[2709]: W1027 07:55:49.823106 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.823169 kubelet[2709]: E1027 07:55:49.823159 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.823500 kubelet[2709]: E1027 07:55:49.823485 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.823574 kubelet[2709]: W1027 07:55:49.823560 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.823649 kubelet[2709]: E1027 07:55:49.823636 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.823892 kubelet[2709]: E1027 07:55:49.823880 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.824055 kubelet[2709]: W1027 07:55:49.823957 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.824055 kubelet[2709]: E1027 07:55:49.823973 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.824362 kubelet[2709]: E1027 07:55:49.824257 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.824362 kubelet[2709]: W1027 07:55:49.824269 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.824362 kubelet[2709]: E1027 07:55:49.824278 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.824534 kubelet[2709]: E1027 07:55:49.824515 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.824563 kubelet[2709]: W1027 07:55:49.824533 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.824563 kubelet[2709]: E1027 07:55:49.824544 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.824716 kubelet[2709]: E1027 07:55:49.824702 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.824716 kubelet[2709]: W1027 07:55:49.824714 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.824771 kubelet[2709]: E1027 07:55:49.824722 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.824867 kubelet[2709]: E1027 07:55:49.824848 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.824867 kubelet[2709]: W1027 07:55:49.824860 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.824867 kubelet[2709]: E1027 07:55:49.824867 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.825137 kubelet[2709]: E1027 07:55:49.825120 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.825137 kubelet[2709]: W1027 07:55:49.825137 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.825202 kubelet[2709]: E1027 07:55:49.825147 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.825413 kubelet[2709]: E1027 07:55:49.825398 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.825413 kubelet[2709]: W1027 07:55:49.825412 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.825463 kubelet[2709]: E1027 07:55:49.825426 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.825611 kubelet[2709]: E1027 07:55:49.825596 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.825611 kubelet[2709]: W1027 07:55:49.825610 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.825675 kubelet[2709]: E1027 07:55:49.825619 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.825770 kubelet[2709]: E1027 07:55:49.825754 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.825803 kubelet[2709]: W1027 07:55:49.825770 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.825803 kubelet[2709]: E1027 07:55:49.825791 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.825966 kubelet[2709]: E1027 07:55:49.825948 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.825966 kubelet[2709]: W1027 07:55:49.825961 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.826029 kubelet[2709]: E1027 07:55:49.825970 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.826266 kubelet[2709]: E1027 07:55:49.826246 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.826266 kubelet[2709]: W1027 07:55:49.826262 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.826354 kubelet[2709]: E1027 07:55:49.826274 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.827205 kubelet[2709]: E1027 07:55:49.827181 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.827205 kubelet[2709]: W1027 07:55:49.827196 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.827205 kubelet[2709]: E1027 07:55:49.827207 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.827646 kubelet[2709]: E1027 07:55:49.827624 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.827646 kubelet[2709]: W1027 07:55:49.827639 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.827646 kubelet[2709]: E1027 07:55:49.827649 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.828544 kubelet[2709]: E1027 07:55:49.827834 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.828544 kubelet[2709]: W1027 07:55:49.827846 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.828544 kubelet[2709]: E1027 07:55:49.827858 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.828544 kubelet[2709]: E1027 07:55:49.828057 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.828544 kubelet[2709]: W1027 07:55:49.828065 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.828544 kubelet[2709]: E1027 07:55:49.828073 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.829265 kubelet[2709]: E1027 07:55:49.829117 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.829365 kubelet[2709]: W1027 07:55:49.829283 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.829365 kubelet[2709]: E1027 07:55:49.829297 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 07:55:49.834442 kubelet[2709]: E1027 07:55:49.834377 2709 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 07:55:49.834442 kubelet[2709]: W1027 07:55:49.834394 2709 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 07:55:49.834442 kubelet[2709]: E1027 07:55:49.834406 2709 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 07:55:49.843493 systemd[1]: Started cri-containerd-ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae.scope - libcontainer container ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae. Oct 27 07:55:49.898144 containerd[1582]: time="2025-10-27T07:55:49.898101678Z" level=info msg="StartContainer for \"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\" returns successfully" Oct 27 07:55:49.911437 systemd[1]: cri-containerd-ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae.scope: Deactivated successfully. Oct 27 07:55:49.929163 containerd[1582]: time="2025-10-27T07:55:49.929113780Z" level=info msg="received exit event container_id:\"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\" id:\"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\" pid:3435 exited_at:{seconds:1761551749 nanos:925889788}" Oct 27 07:55:49.929380 containerd[1582]: time="2025-10-27T07:55:49.929212459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\" id:\"ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae\" pid:3435 exited_at:{seconds:1761551749 nanos:925889788}" Oct 27 07:55:49.966481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba4a17a49bea745cf2379b3e36287757fec831a373f923788d8ccdb685d411ae-rootfs.mount: Deactivated successfully. Oct 27 07:55:50.665521 kubelet[2709]: E1027 07:55:50.665482 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:55:50.746583 kubelet[2709]: I1027 07:55:50.746467 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 07:55:50.746887 kubelet[2709]: E1027 07:55:50.746710 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:50.747162 kubelet[2709]: E1027 07:55:50.747037 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:50.747958 containerd[1582]: time="2025-10-27T07:55:50.747653899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 07:55:52.665295 kubelet[2709]: E1027 07:55:52.665235 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:55:53.719139 containerd[1582]: time="2025-10-27T07:55:53.718485570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:53.719139 containerd[1582]: time="2025-10-27T07:55:53.719101482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 27 07:55:53.719844 containerd[1582]: time="2025-10-27T07:55:53.719816473Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:53.722347 containerd[1582]: time="2025-10-27T07:55:53.722032526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:53.722982 containerd[1582]: time="2025-10-27T07:55:53.722931234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.975242455s" Oct 27 07:55:53.722982 containerd[1582]: time="2025-10-27T07:55:53.722968234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 27 07:55:53.727152 containerd[1582]: time="2025-10-27T07:55:53.726816266Z" level=info msg="CreateContainer within sandbox \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 07:55:53.738250 containerd[1582]: time="2025-10-27T07:55:53.738212365Z" level=info msg="Container 88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:53.748980 containerd[1582]: time="2025-10-27T07:55:53.748943272Z" level=info msg="CreateContainer within sandbox \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\"" Oct 27 07:55:53.749551 containerd[1582]: time="2025-10-27T07:55:53.749508825Z" level=info msg="StartContainer for \"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\"" Oct 27 07:55:53.751511 containerd[1582]: time="2025-10-27T07:55:53.751483440Z" level=info msg="connecting to shim 88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102" address="unix:///run/containerd/s/4075a889ed56869df013a4b0e688533d941a5856e32407f75f474073cee38199" protocol=ttrpc version=3 Oct 27 07:55:53.777537 systemd[1]: Started cri-containerd-88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102.scope - libcontainer container 88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102. Oct 27 07:55:53.809690 containerd[1582]: time="2025-10-27T07:55:53.809648318Z" level=info msg="StartContainer for \"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\" returns successfully" Oct 27 07:55:54.318678 containerd[1582]: time="2025-10-27T07:55:54.318619962Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 07:55:54.322114 systemd[1]: cri-containerd-88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102.scope: Deactivated successfully. Oct 27 07:55:54.322422 systemd[1]: cri-containerd-88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102.scope: Consumed 441ms CPU time, 178.1M memory peak, 2.1M read from disk, 165.9M written to disk. 
Oct 27 07:55:54.337382 containerd[1582]: time="2025-10-27T07:55:54.336940823Z" level=info msg="received exit event container_id:\"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\" id:\"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\" pid:3495 exited_at:{seconds:1761551754 nanos:336735266}" Oct 27 07:55:54.337382 containerd[1582]: time="2025-10-27T07:55:54.337091542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\" id:\"88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102\" pid:3495 exited_at:{seconds:1761551754 nanos:336735266}" Oct 27 07:55:54.341006 kubelet[2709]: I1027 07:55:54.340970 2709 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 27 07:55:54.383424 systemd[1]: Created slice kubepods-burstable-pod18d6a388_d738_473e_98de_05b1bf50cdfc.slice - libcontainer container kubepods-burstable-pod18d6a388_d738_473e_98de_05b1bf50cdfc.slice. Oct 27 07:55:54.389169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88b7ec9eb32f8c03cf4040d5a3899ee8a372cdc1e75a13c7dc59c2ddca4fb102-rootfs.mount: Deactivated successfully. Oct 27 07:55:54.396239 systemd[1]: Created slice kubepods-burstable-pod01432c6e_434d_4261_a178_18e07a695baf.slice - libcontainer container kubepods-burstable-pod01432c6e_434d_4261_a178_18e07a695baf.slice. Oct 27 07:55:54.414825 systemd[1]: Created slice kubepods-besteffort-podd6332fd8_67a0_4328_a949_abb03ff66ef6.slice - libcontainer container kubepods-besteffort-podd6332fd8_67a0_4328_a949_abb03ff66ef6.slice. Oct 27 07:55:54.449532 systemd[1]: Created slice kubepods-besteffort-pod9fb2f423_4788_4c5c_9c0e_a84c0b4825df.slice - libcontainer container kubepods-besteffort-pod9fb2f423_4788_4c5c_9c0e_a84c0b4825df.slice. Oct 27 07:55:54.456055 systemd[1]: Created slice kubepods-besteffort-pod656cce1b_d114_4468_aa23_f4cc0ed0fc43.slice - libcontainer container kubepods-besteffort-pod656cce1b_d114_4468_aa23_f4cc0ed0fc43.slice. 
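The pod_startup_latency_tracker entry logged at 07:55:49.751 for calico-typha-799fd7c6c-czjhf is internally consistent with the SLO duration excluding the image-pull window: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus (lastFinishedPulling - firstStartedPulling). A small verification sketch using the wall-clock values copied from that record (this is arithmetic over the logged timestamps, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // format used in the kubelet log
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-10-27 07:55:46 +0000 UTC")
	firstPull := parse("2025-10-27 07:55:47.006360559 +0000 UTC")
	lastPull := parse("2025-10-27 07:55:48.674735764 +0000 UTC")
	observed := parse("2025-10-27 07:55:49.751134048 +0000 UTC")

	e2e := observed.Sub(created)         // 3.751134048s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.082758843s == podStartSLOduration (pull time excluded)
	fmt.Println(e2e, slo)
}

The pull window (about 1.668s) also matches the containerd "Pulled image ... in 1.667171984s" entry for the typha image to within a millisecond or so of bookkeeping overhead.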
Oct 27 07:55:54.458199 kubelet[2709]: I1027 07:55:54.458158 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9fb2f423-4788-4c5c-9c0e-a84c0b4825df-calico-apiserver-certs\") pod \"calico-apiserver-557595484f-vw9q8\" (UID: \"9fb2f423-4788-4c5c-9c0e-a84c0b4825df\") " pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" Oct 27 07:55:54.458199 kubelet[2709]: I1027 07:55:54.458194 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt4k8\" (UniqueName: \"kubernetes.io/projected/9fb2f423-4788-4c5c-9c0e-a84c0b4825df-kube-api-access-gt4k8\") pod \"calico-apiserver-557595484f-vw9q8\" (UID: \"9fb2f423-4788-4c5c-9c0e-a84c0b4825df\") " pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" Oct 27 07:55:54.458617 kubelet[2709]: I1027 07:55:54.458213 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxz6d\" (UniqueName: \"kubernetes.io/projected/d6332fd8-67a0-4328-a949-abb03ff66ef6-kube-api-access-cxz6d\") pod \"calico-apiserver-68b45bdbf4-4zxr4\" (UID: \"d6332fd8-67a0-4328-a949-abb03ff66ef6\") " pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" Oct 27 07:55:54.458617 kubelet[2709]: I1027 07:55:54.458324 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/656cce1b-d114-4468-aa23-f4cc0ed0fc43-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-sfpcs\" (UID: \"656cce1b-d114-4468-aa23-f4cc0ed0fc43\") " pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.458617 kubelet[2709]: I1027 07:55:54.458376 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656cce1b-d114-4468-aa23-f4cc0ed0fc43-config\") pod \"goldmane-7c778bb748-sfpcs\" (UID: \"656cce1b-d114-4468-aa23-f4cc0ed0fc43\") " pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.458617 kubelet[2709]: I1027 07:55:54.458394 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/656cce1b-d114-4468-aa23-f4cc0ed0fc43-goldmane-key-pair\") pod \"goldmane-7c778bb748-sfpcs\" (UID: \"656cce1b-d114-4468-aa23-f4cc0ed0fc43\") " pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.458617 kubelet[2709]: I1027 07:55:54.458413 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18d6a388-d738-473e-98de-05b1bf50cdfc-config-volume\") pod \"coredns-66bc5c9577-nrs4n\" (UID: \"18d6a388-d738-473e-98de-05b1bf50cdfc\") " pod="kube-system/coredns-66bc5c9577-nrs4n" Oct 27 07:55:54.458751 kubelet[2709]: I1027 07:55:54.458430 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hx86\" (UniqueName: \"kubernetes.io/projected/18d6a388-d738-473e-98de-05b1bf50cdfc-kube-api-access-2hx86\") pod \"coredns-66bc5c9577-nrs4n\" (UID: \"18d6a388-d738-473e-98de-05b1bf50cdfc\") " pod="kube-system/coredns-66bc5c9577-nrs4n" Oct 27 07:55:54.458751 kubelet[2709]: I1027 07:55:54.458580 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpstw\" (UniqueName: 
\"kubernetes.io/projected/01432c6e-434d-4261-a178-18e07a695baf-kube-api-access-mpstw\") pod \"coredns-66bc5c9577-wltmj\" (UID: \"01432c6e-434d-4261-a178-18e07a695baf\") " pod="kube-system/coredns-66bc5c9577-wltmj" Oct 27 07:55:54.458751 kubelet[2709]: I1027 07:55:54.458605 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6332fd8-67a0-4328-a949-abb03ff66ef6-calico-apiserver-certs\") pod \"calico-apiserver-68b45bdbf4-4zxr4\" (UID: \"d6332fd8-67a0-4328-a949-abb03ff66ef6\") " pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" Oct 27 07:55:54.458751 kubelet[2709]: I1027 07:55:54.458671 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7mw9\" (UniqueName: \"kubernetes.io/projected/656cce1b-d114-4468-aa23-f4cc0ed0fc43-kube-api-access-j7mw9\") pod \"goldmane-7c778bb748-sfpcs\" (UID: \"656cce1b-d114-4468-aa23-f4cc0ed0fc43\") " pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.458955 kubelet[2709]: I1027 07:55:54.458907 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01432c6e-434d-4261-a178-18e07a695baf-config-volume\") pod \"coredns-66bc5c9577-wltmj\" (UID: \"01432c6e-434d-4261-a178-18e07a695baf\") " pod="kube-system/coredns-66bc5c9577-wltmj" Oct 27 07:55:54.467954 systemd[1]: Created slice kubepods-besteffort-podc70e5e6e_faf5_4f92_89ce_19004e63b56f.slice - libcontainer container kubepods-besteffort-podc70e5e6e_faf5_4f92_89ce_19004e63b56f.slice. Oct 27 07:55:54.475263 systemd[1]: Created slice kubepods-besteffort-pod754e12e0_7fe8_44bd_b549_51099302c4a2.slice - libcontainer container kubepods-besteffort-pod754e12e0_7fe8_44bd_b549_51099302c4a2.slice. Oct 27 07:55:54.483776 systemd[1]: Created slice kubepods-besteffort-pod8197235e_fb1d_4c19_95db_1c579409d474.slice - libcontainer container kubepods-besteffort-pod8197235e_fb1d_4c19_95db_1c579409d474.slice. 
Oct 27 07:55:54.559801 kubelet[2709]: I1027 07:55:54.559733 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-backend-key-pair\") pod \"whisker-684d78f548-v9trk\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " pod="calico-system/whisker-684d78f548-v9trk" Oct 27 07:55:54.559801 kubelet[2709]: I1027 07:55:54.559807 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-ca-bundle\") pod \"whisker-684d78f548-v9trk\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " pod="calico-system/whisker-684d78f548-v9trk" Oct 27 07:55:54.560004 kubelet[2709]: I1027 07:55:54.559824 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8197235e-fb1d-4c19-95db-1c579409d474-calico-apiserver-certs\") pod \"calico-apiserver-68b45bdbf4-hbjcq\" (UID: \"8197235e-fb1d-4c19-95db-1c579409d474\") " pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" Oct 27 07:55:54.560004 kubelet[2709]: I1027 07:55:54.559901 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c70e5e6e-faf5-4f92-89ce-19004e63b56f-tigera-ca-bundle\") pod \"calico-kube-controllers-78d9d46969-qr569\" (UID: \"c70e5e6e-faf5-4f92-89ce-19004e63b56f\") " pod="calico-system/calico-kube-controllers-78d9d46969-qr569" Oct 27 07:55:54.560004 kubelet[2709]: I1027 07:55:54.559920 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9qz\" (UniqueName: \"kubernetes.io/projected/754e12e0-7fe8-44bd-b549-51099302c4a2-kube-api-access-gr9qz\") pod \"whisker-684d78f548-v9trk\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " pod="calico-system/whisker-684d78f548-v9trk" Oct 27 07:55:54.560004 kubelet[2709]: I1027 07:55:54.559938 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2ns\" (UniqueName: \"kubernetes.io/projected/8197235e-fb1d-4c19-95db-1c579409d474-kube-api-access-4k2ns\") pod \"calico-apiserver-68b45bdbf4-hbjcq\" (UID: \"8197235e-fb1d-4c19-95db-1c579409d474\") " pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" Oct 27 07:55:54.560004 kubelet[2709]: I1027 07:55:54.559984 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvrf\" (UniqueName: \"kubernetes.io/projected/c70e5e6e-faf5-4f92-89ce-19004e63b56f-kube-api-access-wrvrf\") pod \"calico-kube-controllers-78d9d46969-qr569\" (UID: \"c70e5e6e-faf5-4f92-89ce-19004e63b56f\") " pod="calico-system/calico-kube-controllers-78d9d46969-qr569" Oct 27 07:55:54.675965 systemd[1]: Created slice kubepods-besteffort-podcea2b2ed_e433_4f33_b71d_afa53cd98b5f.slice - libcontainer container kubepods-besteffort-podcea2b2ed_e433_4f33_b71d_afa53cd98b5f.slice. 
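The recurring dns.go "Nameserver limits exceeded" warnings (07:55:47.363, 07:55:49.738, 07:55:50.746, and again below) indicate the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line keeps exactly three entries (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are omitted, consistent with the usual three-resolver cap. A rough sketch of that truncation, assuming the three-entry cap; this is not the kubelet's dns package:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed cap, consistent with the three servers kept in the log

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", servers)
}

Trimming the host's resolv.conf (or the upstream DHCP-provided list) to three nameservers would make these warnings stop; they are otherwise harmless repetition on every pod DNS setup.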
Oct 27 07:55:54.680600 containerd[1582]: time="2025-10-27T07:55:54.680485532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vvvz,Uid:cea2b2ed-e433-4f33-b71d-afa53cd98b5f,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:54.734185 containerd[1582]: time="2025-10-27T07:55:54.734031455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-4zxr4,Uid:d6332fd8-67a0-4328-a949-abb03ff66ef6,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:55:54.734542 kubelet[2709]: E1027 07:55:54.734394 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:54.735314 containerd[1582]: time="2025-10-27T07:55:54.735284160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrs4n,Uid:18d6a388-d738-473e-98de-05b1bf50cdfc,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:54.736567 kubelet[2709]: E1027 07:55:54.736539 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:54.737147 containerd[1582]: time="2025-10-27T07:55:54.737116378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wltmj,Uid:01432c6e-434d-4261-a178-18e07a695baf,Namespace:kube-system,Attempt:0,}" Oct 27 07:55:54.758064 containerd[1582]: time="2025-10-27T07:55:54.757808172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557595484f-vw9q8,Uid:9fb2f423-4788-4c5c-9c0e-a84c0b4825df,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:55:54.766513 containerd[1582]: time="2025-10-27T07:55:54.766467948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sfpcs,Uid:656cce1b-d114-4468-aa23-f4cc0ed0fc43,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:54.772289 kubelet[2709]: E1027 07:55:54.772247 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:54.775616 containerd[1582]: time="2025-10-27T07:55:54.775123245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d9d46969-qr569,Uid:c70e5e6e-faf5-4f92-89ce-19004e63b56f,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:54.776164 containerd[1582]: time="2025-10-27T07:55:54.776110434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 07:55:54.793605 containerd[1582]: time="2025-10-27T07:55:54.793550906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-hbjcq,Uid:8197235e-fb1d-4c19-95db-1c579409d474,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:55:54.793750 containerd[1582]: time="2025-10-27T07:55:54.793725104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-684d78f548-v9trk,Uid:754e12e0-7fe8-44bd-b549-51099302c4a2,Namespace:calico-system,Attempt:0,}" Oct 27 07:55:54.848143 containerd[1582]: time="2025-10-27T07:55:54.848094336Z" level=error msg="Failed to destroy network for sandbox \"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.857185 containerd[1582]: time="2025-10-27T07:55:54.857135189Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wltmj,Uid:01432c6e-434d-4261-a178-18e07a695baf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.857643 kubelet[2709]: E1027 07:55:54.857593 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.857709 kubelet[2709]: E1027 07:55:54.857671 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wltmj" Oct 27 07:55:54.857709 kubelet[2709]: E1027 07:55:54.857692 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wltmj" Oct 27 07:55:54.857781 kubelet[2709]: E1027 07:55:54.857750 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wltmj_kube-system(01432c6e-434d-4261-a178-18e07a695baf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wltmj_kube-system(01432c6e-434d-4261-a178-18e07a695baf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b42566f7f85a4a7a9c47f959059315f09751b9a8cb28fdf6d11d2090e897b08e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wltmj" podUID="01432c6e-434d-4261-a178-18e07a695baf" Oct 27 07:55:54.870599 containerd[1582]: time="2025-10-27T07:55:54.870554549Z" level=error msg="Failed to destroy network for sandbox \"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.870881 containerd[1582]: time="2025-10-27T07:55:54.870767346Z" level=error msg="Failed to destroy network for sandbox \"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.872230 containerd[1582]: 
time="2025-10-27T07:55:54.872121050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-4zxr4,Uid:d6332fd8-67a0-4328-a949-abb03ff66ef6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.872759 kubelet[2709]: E1027 07:55:54.872689 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.872759 kubelet[2709]: E1027 07:55:54.872748 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" Oct 27 07:55:54.872864 kubelet[2709]: E1027 07:55:54.872766 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" Oct 27 07:55:54.872864 kubelet[2709]: E1027 07:55:54.872821 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b45bdbf4-4zxr4_calico-apiserver(d6332fd8-67a0-4328-a949-abb03ff66ef6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b45bdbf4-4zxr4_calico-apiserver(d6332fd8-67a0-4328-a949-abb03ff66ef6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dd3d0fee3fc39f321d739fe14fc1a01519e16debbca040b11ca1c065e409ef1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6" Oct 27 07:55:54.873547 containerd[1582]: time="2025-10-27T07:55:54.873512154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vvvz,Uid:cea2b2ed-e433-4f33-b71d-afa53cd98b5f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.873896 kubelet[2709]: E1027 07:55:54.873825 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.873896 kubelet[2709]: E1027 07:55:54.873901 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:54.874017 kubelet[2709]: E1027 07:55:54.873936 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7vvvz" Oct 27 07:55:54.874017 kubelet[2709]: E1027 07:55:54.874001 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeececca0b960707a420b4fd065e8205e73623560183787fb7b83c76a9874499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:55:54.892513 containerd[1582]: time="2025-10-27T07:55:54.892466968Z" level=error msg="Failed to destroy network for sandbox \"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.893874 containerd[1582]: time="2025-10-27T07:55:54.893759633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrs4n,Uid:18d6a388-d738-473e-98de-05b1bf50cdfc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.894201 kubelet[2709]: E1027 07:55:54.894130 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.894279 kubelet[2709]: E1027 07:55:54.894217 2709 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nrs4n" Oct 27 07:55:54.894279 kubelet[2709]: E1027 07:55:54.894243 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nrs4n" Oct 27 07:55:54.894454 kubelet[2709]: E1027 07:55:54.894320 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nrs4n_kube-system(18d6a388-d738-473e-98de-05b1bf50cdfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nrs4n_kube-system(18d6a388-d738-473e-98de-05b1bf50cdfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3438582272748d51b75612a20b9a1a0515d2ce2cdb5f969447f9b1b851a4d16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nrs4n" podUID="18d6a388-d738-473e-98de-05b1bf50cdfc" Oct 27 07:55:54.910545 containerd[1582]: time="2025-10-27T07:55:54.910493353Z" level=error msg="Failed to destroy network for sandbox \"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.911055 containerd[1582]: time="2025-10-27T07:55:54.910979908Z" level=error msg="Failed to destroy network for sandbox \"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.912619 containerd[1582]: time="2025-10-27T07:55:54.912298812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-684d78f548-v9trk,Uid:754e12e0-7fe8-44bd-b549-51099302c4a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.913384 kubelet[2709]: E1027 07:55:54.913270 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.913452 kubelet[2709]: 
E1027 07:55:54.913430 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-684d78f548-v9trk" Oct 27 07:55:54.913497 kubelet[2709]: E1027 07:55:54.913454 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-684d78f548-v9trk" Oct 27 07:55:54.913546 kubelet[2709]: E1027 07:55:54.913513 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-684d78f548-v9trk_calico-system(754e12e0-7fe8-44bd-b549-51099302c4a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-684d78f548-v9trk_calico-system(754e12e0-7fe8-44bd-b549-51099302c4a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1817915a6d29557139c9d71bc43f0d2ea23c1fac45ef1f240c0305870d983681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-684d78f548-v9trk" podUID="754e12e0-7fe8-44bd-b549-51099302c4a2" Oct 27 07:55:54.913953 containerd[1582]: time="2025-10-27T07:55:54.913920193Z" level=error msg="Failed to destroy network for sandbox \"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.914303 containerd[1582]: time="2025-10-27T07:55:54.914268788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d9d46969-qr569,Uid:c70e5e6e-faf5-4f92-89ce-19004e63b56f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.914537 kubelet[2709]: E1027 07:55:54.914493 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.914582 kubelet[2709]: E1027 07:55:54.914548 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" Oct 27 07:55:54.914582 kubelet[2709]: E1027 07:55:54.914566 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" Oct 27 07:55:54.914630 kubelet[2709]: E1027 07:55:54.914598 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78d9d46969-qr569_calico-system(c70e5e6e-faf5-4f92-89ce-19004e63b56f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78d9d46969-qr569_calico-system(c70e5e6e-faf5-4f92-89ce-19004e63b56f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3280ada5ad6298fbcba5eb7bce8c729719186e5346cac99eea57e0f9ae021a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" podUID="c70e5e6e-faf5-4f92-89ce-19004e63b56f" Oct 27 07:55:54.915593 containerd[1582]: time="2025-10-27T07:55:54.915531093Z" level=error msg="Failed to destroy network for sandbox \"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.915746 containerd[1582]: time="2025-10-27T07:55:54.915720651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-hbjcq,Uid:8197235e-fb1d-4c19-95db-1c579409d474,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.915911 kubelet[2709]: E1027 07:55:54.915885 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.915970 kubelet[2709]: E1027 07:55:54.915952 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" Oct 27 07:55:54.915996 kubelet[2709]: E1027 07:55:54.915973 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" Oct 27 07:55:54.916035 kubelet[2709]: E1027 07:55:54.916016 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b45bdbf4-hbjcq_calico-apiserver(8197235e-fb1d-4c19-95db-1c579409d474)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b45bdbf4-hbjcq_calico-apiserver(8197235e-fb1d-4c19-95db-1c579409d474)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e1a0c2256df5bf7624327755972173bd2d4f3f3054ac6211d3cb01752e82964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:55:54.916758 containerd[1582]: time="2025-10-27T07:55:54.916722399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sfpcs,Uid:656cce1b-d114-4468-aa23-f4cc0ed0fc43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.916997 kubelet[2709]: E1027 07:55:54.916944 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.916997 kubelet[2709]: E1027 07:55:54.916980 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.917062 kubelet[2709]: E1027 07:55:54.916998 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sfpcs" Oct 27 07:55:54.917062 kubelet[2709]: E1027 07:55:54.917027 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-sfpcs_calico-system(656cce1b-d114-4468-aa23-f4cc0ed0fc43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-7c778bb748-sfpcs_calico-system(656cce1b-d114-4468-aa23-f4cc0ed0fc43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"825b3bedfcdff174d1f0cdbc856a1d3b4b791e23cd5276338cdb329467de12b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:55:54.933685 containerd[1582]: time="2025-10-27T07:55:54.933516319Z" level=error msg="Failed to destroy network for sandbox \"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.935227 containerd[1582]: time="2025-10-27T07:55:54.934968902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557595484f-vw9q8,Uid:9fb2f423-4788-4c5c-9c0e-a84c0b4825df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.935326 kubelet[2709]: E1027 07:55:54.935153 2709 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 07:55:54.935326 kubelet[2709]: E1027 07:55:54.935199 2709 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" Oct 27 07:55:54.935326 kubelet[2709]: E1027 07:55:54.935228 2709 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" Oct 27 07:55:54.936189 kubelet[2709]: E1027 07:55:54.935267 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-557595484f-vw9q8_calico-apiserver(9fb2f423-4788-4c5c-9c0e-a84c0b4825df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-557595484f-vw9q8_calico-apiserver(9fb2f423-4788-4c5c-9c0e-a84c0b4825df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2639e21d0c98464c9d6eedad233e3cd82b20e5a9c85fedc9b769e495e12c8b1\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:55:55.740110 systemd[1]: run-netns-cni\x2d5ad8438f\x2d8241\x2df732\x2d4f85\x2d7d754c8c2f79.mount: Deactivated successfully. Oct 27 07:55:55.740212 systemd[1]: run-netns-cni\x2d38d29ac7\x2d8417\x2d2baf\x2d6785\x2d9a7aaa8c3142.mount: Deactivated successfully. Oct 27 07:55:55.740259 systemd[1]: run-netns-cni\x2d7ab3a5db\x2d5356\x2dac31\x2d038a\x2dec173c7f2d64.mount: Deactivated successfully. Oct 27 07:55:55.740307 systemd[1]: run-netns-cni\x2d7535b1d3\x2dc52c\x2d7de7\x2dd959\x2db75d4c0a4069.mount: Deactivated successfully. Oct 27 07:55:55.740365 systemd[1]: run-netns-cni\x2de3a10117\x2d5dde\x2d38d5\x2ded81\x2d9295e88042ad.mount: Deactivated successfully. Oct 27 07:55:55.740407 systemd[1]: run-netns-cni\x2d08f490ad\x2d5401\x2dac21\x2d3a1b\x2db40ca0d4933b.mount: Deactivated successfully. Oct 27 07:55:55.740447 systemd[1]: run-netns-cni\x2dbc8d504d\x2dccaa\x2d854c\x2d5868\x2dc3406b244a31.mount: Deactivated successfully. Oct 27 07:55:58.698522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372295283.mount: Deactivated successfully. Oct 27 07:55:58.755910 containerd[1582]: time="2025-10-27T07:55:58.748099542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 27 07:55:58.755910 containerd[1582]: time="2025-10-27T07:55:58.751318270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.975173277s" Oct 27 07:55:58.756310 containerd[1582]: time="2025-10-27T07:55:58.755940902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 27 07:55:58.756310 containerd[1582]: time="2025-10-27T07:55:58.755063951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:58.756975 containerd[1582]: time="2025-10-27T07:55:58.756949372Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:58.758391 containerd[1582]: time="2025-10-27T07:55:58.758356518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 07:55:58.771436 containerd[1582]: time="2025-10-27T07:55:58.771393065Z" level=info msg="CreateContainer within sandbox \"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 07:55:58.779020 containerd[1582]: time="2025-10-27T07:55:58.777779520Z" level=info msg="Container 8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:55:58.785671 containerd[1582]: time="2025-10-27T07:55:58.785630280Z" level=info msg="CreateContainer within sandbox 
\"91c8319d98a2eab7b842601da41b9d0db7fb6a7c0240462666214e5d55c5a9a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7\"" Oct 27 07:55:58.786301 containerd[1582]: time="2025-10-27T07:55:58.786187594Z" level=info msg="StartContainer for \"8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7\"" Oct 27 07:55:58.788024 containerd[1582]: time="2025-10-27T07:55:58.787985655Z" level=info msg="connecting to shim 8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7" address="unix:///run/containerd/s/4075a889ed56869df013a4b0e688533d941a5856e32407f75f474073cee38199" protocol=ttrpc version=3 Oct 27 07:55:58.817516 systemd[1]: Started cri-containerd-8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7.scope - libcontainer container 8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7. Oct 27 07:55:58.853639 containerd[1582]: time="2025-10-27T07:55:58.853590946Z" level=info msg="StartContainer for \"8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7\" returns successfully" Oct 27 07:55:58.973539 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 07:55:58.973998 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 27 07:55:59.193710 kubelet[2709]: I1027 07:55:59.193670 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-backend-key-pair\") pod \"754e12e0-7fe8-44bd-b549-51099302c4a2\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " Oct 27 07:55:59.194263 kubelet[2709]: I1027 07:55:59.193723 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-ca-bundle\") pod \"754e12e0-7fe8-44bd-b549-51099302c4a2\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " Oct 27 07:55:59.194263 kubelet[2709]: I1027 07:55:59.193749 2709 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr9qz\" (UniqueName: \"kubernetes.io/projected/754e12e0-7fe8-44bd-b549-51099302c4a2-kube-api-access-gr9qz\") pod \"754e12e0-7fe8-44bd-b549-51099302c4a2\" (UID: \"754e12e0-7fe8-44bd-b549-51099302c4a2\") " Oct 27 07:55:59.196110 kubelet[2709]: I1027 07:55:59.195978 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "754e12e0-7fe8-44bd-b549-51099302c4a2" (UID: "754e12e0-7fe8-44bd-b549-51099302c4a2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 07:55:59.199588 kubelet[2709]: I1027 07:55:59.199497 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "754e12e0-7fe8-44bd-b549-51099302c4a2" (UID: "754e12e0-7fe8-44bd-b549-51099302c4a2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 07:55:59.200057 kubelet[2709]: I1027 07:55:59.200005 2709 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754e12e0-7fe8-44bd-b549-51099302c4a2-kube-api-access-gr9qz" (OuterVolumeSpecName: "kube-api-access-gr9qz") pod "754e12e0-7fe8-44bd-b549-51099302c4a2" (UID: "754e12e0-7fe8-44bd-b549-51099302c4a2"). InnerVolumeSpecName "kube-api-access-gr9qz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 07:55:59.294541 kubelet[2709]: I1027 07:55:59.294485 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 27 07:55:59.294541 kubelet[2709]: I1027 07:55:59.294526 2709 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gr9qz\" (UniqueName: \"kubernetes.io/projected/754e12e0-7fe8-44bd-b549-51099302c4a2-kube-api-access-gr9qz\") on node \"localhost\" DevicePath \"\"" Oct 27 07:55:59.294541 kubelet[2709]: I1027 07:55:59.294539 2709 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/754e12e0-7fe8-44bd-b549-51099302c4a2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 27 07:55:59.698754 systemd[1]: var-lib-kubelet-pods-754e12e0\x2d7fe8\x2d44bd\x2db549\x2d51099302c4a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgr9qz.mount: Deactivated successfully. Oct 27 07:55:59.698894 systemd[1]: var-lib-kubelet-pods-754e12e0\x2d7fe8\x2d44bd\x2db549\x2d51099302c4a2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 27 07:55:59.818635 kubelet[2709]: E1027 07:55:59.818592 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:55:59.827423 systemd[1]: Removed slice kubepods-besteffort-pod754e12e0_7fe8_44bd_b549_51099302c4a2.slice - libcontainer container kubepods-besteffort-pod754e12e0_7fe8_44bd_b549_51099302c4a2.slice. Oct 27 07:55:59.837464 kubelet[2709]: I1027 07:55:59.837138 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-brbsr" podStartSLOduration=2.4461892929999998 podStartE2EDuration="13.836817817s" podCreationTimestamp="2025-10-27 07:55:46 +0000 UTC" firstStartedPulling="2025-10-27 07:55:47.366829003 +0000 UTC m=+22.814949072" lastFinishedPulling="2025-10-27 07:55:58.757457527 +0000 UTC m=+34.205577596" observedRunningTime="2025-10-27 07:55:59.835764148 +0000 UTC m=+35.283884217" watchObservedRunningTime="2025-10-27 07:55:59.836817817 +0000 UTC m=+35.284937926" Oct 27 07:55:59.881913 systemd[1]: Created slice kubepods-besteffort-pod3942c5e9_567d_4de9_af49_f62592fa9e2d.slice - libcontainer container kubepods-besteffort-pod3942c5e9_567d_4de9_af49_f62592fa9e2d.slice. 
Oct 27 07:55:59.897712 kubelet[2709]: I1027 07:55:59.897663 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzchc\" (UniqueName: \"kubernetes.io/projected/3942c5e9-567d-4de9-af49-f62592fa9e2d-kube-api-access-rzchc\") pod \"whisker-9dc749786-p5zz4\" (UID: \"3942c5e9-567d-4de9-af49-f62592fa9e2d\") " pod="calico-system/whisker-9dc749786-p5zz4" Oct 27 07:55:59.897847 kubelet[2709]: I1027 07:55:59.897705 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3942c5e9-567d-4de9-af49-f62592fa9e2d-whisker-ca-bundle\") pod \"whisker-9dc749786-p5zz4\" (UID: \"3942c5e9-567d-4de9-af49-f62592fa9e2d\") " pod="calico-system/whisker-9dc749786-p5zz4" Oct 27 07:55:59.897847 kubelet[2709]: I1027 07:55:59.897835 2709 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3942c5e9-567d-4de9-af49-f62592fa9e2d-whisker-backend-key-pair\") pod \"whisker-9dc749786-p5zz4\" (UID: \"3942c5e9-567d-4de9-af49-f62592fa9e2d\") " pod="calico-system/whisker-9dc749786-p5zz4" Oct 27 07:56:00.187238 containerd[1582]: time="2025-10-27T07:56:00.187181873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9dc749786-p5zz4,Uid:3942c5e9-567d-4de9-af49-f62592fa9e2d,Namespace:calico-system,Attempt:0,}" Oct 27 07:56:00.410499 systemd-networkd[1479]: cali6adb8e24657: Link UP Oct 27 07:56:00.410840 systemd-networkd[1479]: cali6adb8e24657: Gained carrier Oct 27 07:56:00.427066 containerd[1582]: 2025-10-27 07:56:00.208 [INFO][3911] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:00.427066 containerd[1582]: 2025-10-27 07:56:00.241 [INFO][3911] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--9dc749786--p5zz4-eth0 whisker-9dc749786- calico-system 3942c5e9-567d-4de9-af49-f62592fa9e2d 961 0 2025-10-27 07:55:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9dc749786 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-9dc749786-p5zz4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6adb8e24657 [] [] }} ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-" Oct 27 07:56:00.427066 containerd[1582]: 2025-10-27 07:56:00.241 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.427066 containerd[1582]: 2025-10-27 07:56:00.340 [INFO][3925] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" HandleID="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Workload="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.341 [INFO][3925] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" 
HandleID="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Workload="localhost-k8s-whisker--9dc749786--p5zz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-9dc749786-p5zz4", "timestamp":"2025-10-27 07:56:00.340859693 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.341 [INFO][3925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.341 [INFO][3925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.342 [INFO][3925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.363 [INFO][3925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" host="localhost" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.372 [INFO][3925] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.378 [INFO][3925] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.380 [INFO][3925] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.382 [INFO][3925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:00.427480 containerd[1582]: 2025-10-27 07:56:00.382 [INFO][3925] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" host="localhost" Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.384 [INFO][3925] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26 Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.388 [INFO][3925] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" host="localhost" Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.396 [INFO][3925] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" host="localhost" Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.396 [INFO][3925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" host="localhost" Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.396 [INFO][3925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:00.427678 containerd[1582]: 2025-10-27 07:56:00.396 [INFO][3925] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" HandleID="k8s-pod-network.dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Workload="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.428169 containerd[1582]: 2025-10-27 07:56:00.402 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9dc749786--p5zz4-eth0", GenerateName:"whisker-9dc749786-", Namespace:"calico-system", SelfLink:"", UID:"3942c5e9-567d-4de9-af49-f62592fa9e2d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9dc749786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-9dc749786-p5zz4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6adb8e24657", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:00.428169 containerd[1582]: 2025-10-27 07:56:00.402 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.428476 containerd[1582]: 2025-10-27 07:56:00.402 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6adb8e24657 ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.428476 containerd[1582]: 2025-10-27 07:56:00.410 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.428530 containerd[1582]: 2025-10-27 07:56:00.410 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--9dc749786--p5zz4-eth0", GenerateName:"whisker-9dc749786-", Namespace:"calico-system", SelfLink:"", UID:"3942c5e9-567d-4de9-af49-f62592fa9e2d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9dc749786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26", Pod:"whisker-9dc749786-p5zz4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6adb8e24657", MAC:"da:6d:20:99:ac:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:00.428583 containerd[1582]: 2025-10-27 07:56:00.422 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" Namespace="calico-system" Pod="whisker-9dc749786-p5zz4" WorkloadEndpoint="localhost-k8s-whisker--9dc749786--p5zz4-eth0" Oct 27 07:56:00.498963 containerd[1582]: time="2025-10-27T07:56:00.498842552Z" level=info msg="connecting to shim dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26" address="unix:///run/containerd/s/4f541b9c1b040217cf217b5f0ad3bcb2f9e3341859c4964c6e7c1aa9f0a80e4d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:00.525502 systemd[1]: Started cri-containerd-dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26.scope - libcontainer container dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26. 
Oct 27 07:56:00.535917 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:00.554572 containerd[1582]: time="2025-10-27T07:56:00.554525342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9dc749786-p5zz4,Uid:3942c5e9-567d-4de9-af49-f62592fa9e2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc5bb9efcb4f24e5cce23400e20129761f1ade35ee5da6b785c551f3b41d9e26\"" Oct 27 07:56:00.556214 containerd[1582]: time="2025-10-27T07:56:00.556182487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 07:56:00.677436 kubelet[2709]: I1027 07:56:00.677317 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754e12e0-7fe8-44bd-b549-51099302c4a2" path="/var/lib/kubelet/pods/754e12e0-7fe8-44bd-b549-51099302c4a2/volumes" Oct 27 07:56:00.776178 containerd[1582]: time="2025-10-27T07:56:00.776042998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:00.777360 containerd[1582]: time="2025-10-27T07:56:00.777302826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 07:56:00.777456 containerd[1582]: time="2025-10-27T07:56:00.777358305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 07:56:00.777564 kubelet[2709]: E1027 07:56:00.777531 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 07:56:00.777937 kubelet[2709]: E1027 07:56:00.777581 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 07:56:00.777937 kubelet[2709]: E1027 07:56:00.777662 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc749786-p5zz4_calico-system(3942c5e9-567d-4de9-af49-f62592fa9e2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:00.778493 containerd[1582]: time="2025-10-27T07:56:00.778467415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 07:56:00.821753 kubelet[2709]: I1027 07:56:00.821701 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 07:56:00.822178 kubelet[2709]: E1027 07:56:00.822159 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:01.002798 containerd[1582]: time="2025-10-27T07:56:01.002739804Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Oct 27 07:56:01.003674 containerd[1582]: time="2025-10-27T07:56:01.003638756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 07:56:01.003731 containerd[1582]: time="2025-10-27T07:56:01.003640876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 07:56:01.004092 kubelet[2709]: E1027 07:56:01.003897 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 07:56:01.004092 kubelet[2709]: E1027 07:56:01.003945 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 07:56:01.004092 kubelet[2709]: E1027 07:56:01.004023 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc749786-p5zz4_calico-system(3942c5e9-567d-4de9-af49-f62592fa9e2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:01.004240 kubelet[2709]: E1027 07:56:01.004061 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc749786-p5zz4" podUID="3942c5e9-567d-4de9-af49-f62592fa9e2d" Oct 27 07:56:01.677456 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:59768.service - OpenSSH per-connection server daemon (10.0.0.1:59768). Oct 27 07:56:01.748917 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 59768 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:01.750309 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:01.754903 systemd-logind[1548]: New session 8 of user core. Oct 27 07:56:01.763506 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 27 07:56:01.811604 systemd-networkd[1479]: cali6adb8e24657: Gained IPv6LL Oct 27 07:56:01.826384 kubelet[2709]: E1027 07:56:01.826290 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc749786-p5zz4" podUID="3942c5e9-567d-4de9-af49-f62592fa9e2d" Oct 27 07:56:01.904454 sshd[4119]: Connection closed by 10.0.0.1 port 59768 Oct 27 07:56:01.904772 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:01.908372 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:59768.service: Deactivated successfully. Oct 27 07:56:01.911016 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 07:56:01.912234 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Oct 27 07:56:01.913631 systemd-logind[1548]: Removed session 8. Oct 27 07:56:05.666915 kubelet[2709]: E1027 07:56:05.666866 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:05.667747 containerd[1582]: time="2025-10-27T07:56:05.667570725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrs4n,Uid:18d6a388-d738-473e-98de-05b1bf50cdfc,Namespace:kube-system,Attempt:0,}" Oct 27 07:56:05.669070 containerd[1582]: time="2025-10-27T07:56:05.668864674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-hbjcq,Uid:8197235e-fb1d-4c19-95db-1c579409d474,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:56:05.811049 systemd-networkd[1479]: cali57f3f25679c: Link UP Oct 27 07:56:05.813880 systemd-networkd[1479]: cali57f3f25679c: Gained carrier Oct 27 07:56:05.826375 containerd[1582]: 2025-10-27 07:56:05.707 [INFO][4220] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:05.826375 containerd[1582]: 2025-10-27 07:56:05.721 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0 calico-apiserver-68b45bdbf4- calico-apiserver 8197235e-fb1d-4c19-95db-1c579409d474 893 0 2025-10-27 07:55:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b45bdbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68b45bdbf4-hbjcq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali57f3f25679c [] [] }} ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" 
Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-" Oct 27 07:56:05.826375 containerd[1582]: 2025-10-27 07:56:05.721 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.826375 containerd[1582]: 2025-10-27 07:56:05.745 [INFO][4244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" HandleID="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.745 [INFO][4244] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" HandleID="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b1b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68b45bdbf4-hbjcq", "timestamp":"2025-10-27 07:56:05.745572453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.745 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.745 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.745 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.757 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" host="localhost" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.779 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.784 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.786 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.788 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:05.826574 containerd[1582]: 2025-10-27 07:56:05.788 [INFO][4244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" host="localhost" Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.789 [INFO][4244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2 Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.793 [INFO][4244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" host="localhost" Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" host="localhost" Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" host="localhost" Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:05.826766 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" HandleID="k8s-pod-network.f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.826958 containerd[1582]: 2025-10-27 07:56:05.808 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0", GenerateName:"calico-apiserver-68b45bdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8197235e-fb1d-4c19-95db-1c579409d474", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b45bdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68b45bdbf4-hbjcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57f3f25679c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:05.827007 containerd[1582]: 2025-10-27 07:56:05.808 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.827007 containerd[1582]: 2025-10-27 07:56:05.808 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57f3f25679c ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.827007 containerd[1582]: 2025-10-27 07:56:05.812 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.827087 containerd[1582]: 2025-10-27 07:56:05.812 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0", GenerateName:"calico-apiserver-68b45bdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8197235e-fb1d-4c19-95db-1c579409d474", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b45bdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2", Pod:"calico-apiserver-68b45bdbf4-hbjcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali57f3f25679c", MAC:"d2:ce:7d:8f:52:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:05.827144 containerd[1582]: 2025-10-27 07:56:05.822 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-hbjcq" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--hbjcq-eth0" Oct 27 07:56:05.863818 containerd[1582]: time="2025-10-27T07:56:05.862710905Z" level=info msg="connecting to shim f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2" address="unix:///run/containerd/s/9483165109017b3822e0efe036aa5d155398deddc26de00a026b410b03ddffca" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:05.895524 systemd[1]: Started cri-containerd-f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2.scope - libcontainer container f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2. 
Oct 27 07:56:05.910206 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:05.917316 systemd-networkd[1479]: cali64f3305b9e9: Link UP Oct 27 07:56:05.917821 systemd-networkd[1479]: cali64f3305b9e9: Gained carrier Oct 27 07:56:05.930382 containerd[1582]: 2025-10-27 07:56:05.708 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:05.930382 containerd[1582]: 2025-10-27 07:56:05.728 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nrs4n-eth0 coredns-66bc5c9577- kube-system 18d6a388-d738-473e-98de-05b1bf50cdfc 883 0 2025-10-27 07:55:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nrs4n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64f3305b9e9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-" Oct 27 07:56:05.930382 containerd[1582]: 2025-10-27 07:56:05.728 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930382 containerd[1582]: 2025-10-27 07:56:05.756 [INFO][4249] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" HandleID="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Workload="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.756 [INFO][4249] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" HandleID="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Workload="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nrs4n", "timestamp":"2025-10-27 07:56:05.756214927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.756 [INFO][4249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.800 [INFO][4249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.861 [INFO][4249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" host="localhost" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.881 [INFO][4249] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.893 [INFO][4249] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.894 [INFO][4249] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.897 [INFO][4249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:05.930587 containerd[1582]: 2025-10-27 07:56:05.897 [INFO][4249] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" host="localhost" Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.901 [INFO][4249] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.906 [INFO][4249] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" host="localhost" Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.911 [INFO][4249] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" host="localhost" Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.911 [INFO][4249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" host="localhost" Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.911 [INFO][4249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
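Note how [4249] logged "About to acquire host-wide IPAM lock" at 05.756 but "Acquired" only at 05.800, immediately after [4244] released it: address assignment on the node is serialized behind a single host-wide lock. Below is a minimal sync.Mutex sketch of that serialization (illustrative only, not Calico's lock implementation; which goroutine runs first here is scheduler-dependent, unlike the fixed ordering in the trace).

package main

import (
	"fmt"
	"sync"
)

func main() {
	var hostWideIPAMLock sync.Mutex
	var wg sync.WaitGroup
	next := 130 // next free host number in 192.168.88.128/26, per the trace above

	for _, id := range []string{"4244", "4249"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			hostWideIPAMLock.Lock() // the later caller blocks here, as [4249] did
			defer hostWideIPAMLock.Unlock()
			fmt.Printf("[%s] acquired host-wide lock, assigned 192.168.88.%d/26\n", id, next)
			next++ // safe: only touched while holding the lock
		}(id)
	}
	wg.Wait()
}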
Oct 27 07:56:05.930785 containerd[1582]: 2025-10-27 07:56:05.911 [INFO][4249] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" HandleID="k8s-pod-network.8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Workload="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.914 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nrs4n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"18d6a388-d738-473e-98de-05b1bf50cdfc", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nrs4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f3305b9e9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.914 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.914 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64f3305b9e9 ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.915 
[INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.916 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nrs4n-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"18d6a388-d738-473e-98de-05b1bf50cdfc", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b", Pod:"coredns-66bc5c9577-nrs4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64f3305b9e9", MAC:"26:11:f9:da:de:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:05.930896 containerd[1582]: 2025-10-27 07:56:05.926 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" Namespace="kube-system" Pod="coredns-66bc5c9577-nrs4n" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nrs4n-eth0" Oct 27 07:56:05.941267 containerd[1582]: time="2025-10-27T07:56:05.941232709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-hbjcq,Uid:8197235e-fb1d-4c19-95db-1c579409d474,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f8b804d64f5046a2f2e31ca984162bf8c72b8c5fe93661572e18e0ed4e6e7fc2\"" Oct 27 07:56:05.943093 containerd[1582]: time="2025-10-27T07:56:05.942884096Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:05.951805 containerd[1582]: time="2025-10-27T07:56:05.951774744Z" level=info msg="connecting to shim 8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b" address="unix:///run/containerd/s/8fa89f1649fde4a8409e8bceb91e0cece6b400c793c5205d23045b83d0372ddb" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:05.977490 systemd[1]: Started cri-containerd-8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b.scope - libcontainer container 8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b. Oct 27 07:56:05.987913 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:06.007182 containerd[1582]: time="2025-10-27T07:56:06.007148657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrs4n,Uid:18d6a388-d738-473e-98de-05b1bf50cdfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b\"" Oct 27 07:56:06.009811 kubelet[2709]: E1027 07:56:06.009776 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:06.013954 containerd[1582]: time="2025-10-27T07:56:06.013914003Z" level=info msg="CreateContainer within sandbox \"8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 07:56:06.021016 containerd[1582]: time="2025-10-27T07:56:06.020986468Z" level=info msg="Container 8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:56:06.025488 containerd[1582]: time="2025-10-27T07:56:06.025452713Z" level=info msg="CreateContainer within sandbox \"8819617c143fff77c44f303abf6f8e5ad448401b2819b36674d8ed3e9277971b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03\"" Oct 27 07:56:06.026107 containerd[1582]: time="2025-10-27T07:56:06.025944189Z" level=info msg="StartContainer for \"8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03\"" Oct 27 07:56:06.026765 containerd[1582]: time="2025-10-27T07:56:06.026688823Z" level=info msg="connecting to shim 8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03" address="unix:///run/containerd/s/8fa89f1649fde4a8409e8bceb91e0cece6b400c793c5205d23045b83d0372ddb" protocol=ttrpc version=3 Oct 27 07:56:06.053490 systemd[1]: Started cri-containerd-8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03.scope - libcontainer container 8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03. 
Oct 27 07:56:06.083968 containerd[1582]: time="2025-10-27T07:56:06.083931573Z" level=info msg="StartContainer for \"8814b5ee45e16618f93c1d3f7053ef3960833d878cd97b5dabe7409ad9fe6b03\" returns successfully" Oct 27 07:56:06.163723 containerd[1582]: time="2025-10-27T07:56:06.163674385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:06.164628 containerd[1582]: time="2025-10-27T07:56:06.164454499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:06.164628 containerd[1582]: time="2025-10-27T07:56:06.164506059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:06.164757 kubelet[2709]: E1027 07:56:06.164690 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:06.164828 kubelet[2709]: E1027 07:56:06.164768 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:06.165133 kubelet[2709]: E1027 07:56:06.164842 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68b45bdbf4-hbjcq_calico-apiserver(8197235e-fb1d-4c19-95db-1c579409d474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:06.165133 kubelet[2709]: E1027 07:56:06.164883 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:56:06.668905 containerd[1582]: time="2025-10-27T07:56:06.668864051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d9d46969-qr569,Uid:c70e5e6e-faf5-4f92-89ce-19004e63b56f,Namespace:calico-system,Attempt:0,}" Oct 27 07:56:06.670250 containerd[1582]: time="2025-10-27T07:56:06.670214240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-4zxr4,Uid:d6332fd8-67a0-4328-a949-abb03ff66ef6,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:56:06.671160 kubelet[2709]: E1027 07:56:06.671048 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:06.671924 containerd[1582]: time="2025-10-27T07:56:06.671901307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wltmj,Uid:01432c6e-434d-4261-a178-18e07a695baf,Namespace:kube-system,Attempt:0,}" Oct 27 07:56:06.672791 containerd[1582]: time="2025-10-27T07:56:06.672764900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557595484f-vw9q8,Uid:9fb2f423-4788-4c5c-9c0e-a84c0b4825df,Namespace:calico-apiserver,Attempt:0,}" Oct 27 07:56:06.818903 systemd-networkd[1479]: calib202455667c: Link UP Oct 27 07:56:06.819055 systemd-networkd[1479]: calib202455667c: Gained carrier Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.724 [INFO][4437] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.746 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0 calico-apiserver-68b45bdbf4- calico-apiserver d6332fd8-67a0-4328-a949-abb03ff66ef6 890 0 2025-10-27 07:55:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b45bdbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68b45bdbf4-4zxr4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib202455667c [] [] }} ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.747 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.775 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" HandleID="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.776 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" HandleID="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68b45bdbf4-4zxr4", "timestamp":"2025-10-27 07:56:06.775921689 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.776 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
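The PullImage failure for ghcr.io/flatcar/calico/apiserver:v3.30.4 a few entries above comes back through the CRI as a gRPC error with code NotFound, which is what lets the kubelet report ErrImagePull for a missing reference rather than a transient registry problem. Below is a hedged sketch of that classification using the real google.golang.org/grpc status and codes packages; the helper name classifyPullError is made up for illustration and is not kubelet code.

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classifyPullError separates a missing image reference (gRPC NotFound,
// as in the log above) from other kinds of pull failure.
func classifyPullError(err error) string {
	if err == nil {
		return "pulled"
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return "image reference not found (bad tag or repository)"
	}
	return "other pull failure (network, auth, rate limit, ...)"
}

func main() {
	notFound := status.Error(codes.NotFound,
		`failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.30.4"`)
	fmt.Println(classifyPullError(notFound))
	fmt.Println(classifyPullError(errors.New("dial tcp: connection refused")))
}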
Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.776 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.776 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.787 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.793 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.800 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.802 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.804 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.804 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.806 [INFO][4486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48 Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.810 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.814 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.815 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" host="localhost" Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.815 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:06.834590 containerd[1582]: 2025-10-27 07:56:06.815 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" HandleID="k8s-pod-network.4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Workload="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.817 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0", GenerateName:"calico-apiserver-68b45bdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6332fd8-67a0-4328-a949-abb03ff66ef6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b45bdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68b45bdbf4-4zxr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib202455667c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.817 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.817 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib202455667c ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.819 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.820 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0", GenerateName:"calico-apiserver-68b45bdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6332fd8-67a0-4328-a949-abb03ff66ef6", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b45bdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48", Pod:"calico-apiserver-68b45bdbf4-4zxr4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib202455667c", MAC:"3a:14:37:88:ed:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:06.835108 containerd[1582]: 2025-10-27 07:56:06.833 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" Namespace="calico-apiserver" Pod="calico-apiserver-68b45bdbf4-4zxr4" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b45bdbf4--4zxr4-eth0" Oct 27 07:56:06.841701 kubelet[2709]: E1027 07:56:06.840796 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:06.842947 kubelet[2709]: E1027 07:56:06.842880 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:56:06.857561 kubelet[2709]: I1027 07:56:06.857506 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nrs4n" podStartSLOduration=36.857489207 podStartE2EDuration="36.857489207s" podCreationTimestamp="2025-10-27 07:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:56:06.856726973 +0000 UTC m=+42.304847042" watchObservedRunningTime="2025-10-27 
07:56:06.857489207 +0000 UTC m=+42.305609276" Oct 27 07:56:06.865368 containerd[1582]: time="2025-10-27T07:56:06.864512512Z" level=info msg="connecting to shim 4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48" address="unix:///run/containerd/s/52a6f3276a9490b930bcf422d5af9c9b40f8d8183d31c34dd7fd84748481f451" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:06.905676 systemd[1]: Started cri-containerd-4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48.scope - libcontainer container 4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48. Oct 27 07:56:06.916893 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:59782.service - OpenSSH per-connection server daemon (10.0.0.1:59782). Oct 27 07:56:06.928149 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:06.959273 systemd-networkd[1479]: calid0237067fdf: Link UP Oct 27 07:56:06.960144 systemd-networkd[1479]: calid0237067fdf: Gained carrier Oct 27 07:56:06.984170 containerd[1582]: time="2025-10-27T07:56:06.984120971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b45bdbf4-4zxr4,Uid:d6332fd8-67a0-4328-a949-abb03ff66ef6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c009187f91c5b574b0f85fe26d51ac56109abd443a86a38426df362ca5e6c48\"" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.726 [INFO][4457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.745 [INFO][4457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0 calico-apiserver-557595484f- calico-apiserver 9fb2f423-4788-4c5c-9c0e-a84c0b4825df 891 0 2025-10-27 07:55:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:557595484f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-557595484f-vw9q8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid0237067fdf [] [] }} ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.745 [INFO][4457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.777 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" HandleID="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Workload="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.777 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" HandleID="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" 
Workload="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-557595484f-vw9q8", "timestamp":"2025-10-27 07:56:06.777202199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.777 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.815 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.815 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.889 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.900 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.906 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.909 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.915 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.915 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.919 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481 Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.925 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.934 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.934 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" host="localhost" Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.934 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:06.986985 containerd[1582]: 2025-10-27 07:56:06.934 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" HandleID="k8s-pod-network.03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Workload="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.943 [INFO][4457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0", GenerateName:"calico-apiserver-557595484f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fb2f423-4788-4c5c-9c0e-a84c0b4825df", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557595484f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-557595484f-vw9q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0237067fdf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.943 [INFO][4457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.943 [INFO][4457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0237067fdf ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.962 [INFO][4457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.966 [INFO][4457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0", GenerateName:"calico-apiserver-557595484f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fb2f423-4788-4c5c-9c0e-a84c0b4825df", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"557595484f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481", Pod:"calico-apiserver-557595484f-vw9q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0237067fdf", MAC:"52:df:34:3c:c2:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:06.987839 containerd[1582]: 2025-10-27 07:56:06.975 [INFO][4457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" Namespace="calico-apiserver" Pod="calico-apiserver-557595484f-vw9q8" WorkloadEndpoint="localhost-k8s-calico--apiserver--557595484f--vw9q8-eth0" Oct 27 07:56:06.993004 containerd[1582]: time="2025-10-27T07:56:06.992969621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:06.996473 systemd-networkd[1479]: cali64f3305b9e9: Gained IPv6LL Oct 27 07:56:07.017004 sshd[4579]: Accepted publickey for core from 10.0.0.1 port 59782 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:07.022108 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:07.038307 systemd-logind[1548]: New session 9 of user core. Oct 27 07:56:07.040816 containerd[1582]: time="2025-10-27T07:56:07.040769894Z" level=info msg="connecting to shim 03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481" address="unix:///run/containerd/s/8309f9a998ada169521c4a9231a20a3d7e5c70c3fa643d829bea5831f1cc3e52" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:07.048783 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 07:56:07.067317 systemd-networkd[1479]: cali61abd725e8a: Link UP Oct 27 07:56:07.068427 systemd-networkd[1479]: cali61abd725e8a: Gained carrier Oct 27 07:56:07.076537 systemd[1]: Started cri-containerd-03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481.scope - libcontainer container 03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481. 
Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.728 [INFO][4424] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.748 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0 calico-kube-controllers-78d9d46969- calico-system c70e5e6e-faf5-4f92-89ce-19004e63b56f 894 0 2025-10-27 07:55:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78d9d46969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78d9d46969-qr569 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali61abd725e8a [] [] }} ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.749 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.782 [INFO][4500] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" HandleID="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Workload="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.782 [INFO][4500] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" HandleID="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Workload="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78d9d46969-qr569", "timestamp":"2025-10-27 07:56:06.78210932 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.782 [INFO][4500] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.934 [INFO][4500] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.935 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:06.989 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.004 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.011 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.013 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.017 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.018 [INFO][4500] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.022 [INFO][4500] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0 Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.035 [INFO][4500] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4500] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" host="localhost" Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4500] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:07.082960 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4500] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" HandleID="k8s-pod-network.18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Workload="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.062 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0", GenerateName:"calico-kube-controllers-78d9d46969-", Namespace:"calico-system", SelfLink:"", UID:"c70e5e6e-faf5-4f92-89ce-19004e63b56f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78d9d46969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78d9d46969-qr569", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali61abd725e8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.062 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.062 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61abd725e8a ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.067 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.069 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0", GenerateName:"calico-kube-controllers-78d9d46969-", Namespace:"calico-system", SelfLink:"", UID:"c70e5e6e-faf5-4f92-89ce-19004e63b56f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78d9d46969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0", Pod:"calico-kube-controllers-78d9d46969-qr569", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali61abd725e8a", MAC:"f2:d6:4a:24:fc:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.083786 containerd[1582]: 2025-10-27 07:56:07.080 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" Namespace="calico-system" Pod="calico-kube-controllers-78d9d46969-qr569" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d9d46969--qr569-eth0" Oct 27 07:56:07.094667 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:07.103449 containerd[1582]: time="2025-10-27T07:56:07.103410295Z" level=info msg="connecting to shim 18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0" address="unix:///run/containerd/s/a01e15155d6cd0dc82afdd0b685be3edacbd37a62c49c16de3ac37d23a15f1f1" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:07.133360 containerd[1582]: time="2025-10-27T07:56:07.133299746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-557595484f-vw9q8,Uid:9fb2f423-4788-4c5c-9c0e-a84c0b4825df,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"03c0d58aed15079369ccf2737bf9b3612a405afb3b82a3d064e7a2ba0d145481\"" Oct 27 07:56:07.141527 systemd[1]: Started cri-containerd-18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0.scope - libcontainer container 18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0. 
Oct 27 07:56:07.144050 systemd-networkd[1479]: cali158f6db6490: Link UP Oct 27 07:56:07.145035 systemd-networkd[1479]: cali158f6db6490: Gained carrier Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.727 [INFO][4443] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.745 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wltmj-eth0 coredns-66bc5c9577- kube-system 01432c6e-434d-4261-a178-18e07a695baf 889 0 2025-10-27 07:55:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wltmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali158f6db6490 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.745 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.788 [INFO][4483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" HandleID="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Workload="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.788 [INFO][4483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" HandleID="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Workload="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3b70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wltmj", "timestamp":"2025-10-27 07:56:06.788105193 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:06.788 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.052 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.088 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.104 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.112 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.115 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.117 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.117 [INFO][4483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.119 [INFO][4483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514 Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.123 [INFO][4483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.132 [INFO][4483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.132 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" host="localhost" Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.133 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:07.156638 containerd[1582]: 2025-10-27 07:56:07.133 [INFO][4483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" HandleID="k8s-pod-network.83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Workload="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.138 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wltmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01432c6e-434d-4261-a178-18e07a695baf", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wltmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali158f6db6490", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.138 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.138 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali158f6db6490 ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.145 
[INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.145 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wltmj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"01432c6e-434d-4261-a178-18e07a695baf", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514", Pod:"coredns-66bc5c9577-wltmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali158f6db6490", MAC:"a2:2e:9f:be:0e:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.157171 containerd[1582]: 2025-10-27 07:56:07.154 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" Namespace="kube-system" Pod="coredns-66bc5c9577-wltmj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wltmj-eth0" Oct 27 07:56:07.164710 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:07.180588 containerd[1582]: time="2025-10-27T07:56:07.180384706Z" level=info msg="connecting to shim 83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514" address="unix:///run/containerd/s/a9a836dd869d96b97053ea1e76b520a453172b2a9ed4766ad787e665a9ea64e1" 
namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:07.206516 systemd[1]: Started cri-containerd-83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514.scope - libcontainer container 83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514. Oct 27 07:56:07.211434 containerd[1582]: time="2025-10-27T07:56:07.211375669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d9d46969-qr569,Uid:c70e5e6e-faf5-4f92-89ce-19004e63b56f,Namespace:calico-system,Attempt:0,} returns sandbox id \"18e2aafccedf932c9a6382015ada3e4a3e0fad8d73fff17c8d951dfc66a621a0\"" Oct 27 07:56:07.220676 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:07.247005 containerd[1582]: time="2025-10-27T07:56:07.246956636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wltmj,Uid:01432c6e-434d-4261-a178-18e07a695baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514\"" Oct 27 07:56:07.248187 kubelet[2709]: E1027 07:56:07.248138 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:07.253342 containerd[1582]: time="2025-10-27T07:56:07.253297268Z" level=info msg="CreateContainer within sandbox \"83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 07:56:07.268294 containerd[1582]: time="2025-10-27T07:56:07.268242434Z" level=info msg="Container c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada: CDI devices from CRI Config.CDIDevices: []" Oct 27 07:56:07.274106 containerd[1582]: time="2025-10-27T07:56:07.274046669Z" level=info msg="CreateContainer within sandbox \"83ba3caeae1ed48fc6307e735216eb2e4099c95c854a3de541b68a72833dd514\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada\"" Oct 27 07:56:07.274771 containerd[1582]: time="2025-10-27T07:56:07.274540825Z" level=info msg="StartContainer for \"c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada\"" Oct 27 07:56:07.276468 containerd[1582]: time="2025-10-27T07:56:07.276398331Z" level=info msg="connecting to shim c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada" address="unix:///run/containerd/s/a9a836dd869d96b97053ea1e76b520a453172b2a9ed4766ad787e665a9ea64e1" protocol=ttrpc version=3 Oct 27 07:56:07.284562 sshd[4635]: Connection closed by 10.0.0.1 port 59782 Oct 27 07:56:07.285277 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:07.289283 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:59782.service: Deactivated successfully. Oct 27 07:56:07.291863 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 07:56:07.293888 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Oct 27 07:56:07.295406 systemd-logind[1548]: Removed session 9. Oct 27 07:56:07.309630 systemd[1]: Started cri-containerd-c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada.scope - libcontainer container c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada. 
Oct 27 07:56:07.335091 containerd[1582]: time="2025-10-27T07:56:07.335000763Z" level=info msg="StartContainer for \"c700313221c2555f627a134c9e26fd29c996d3ebc77fe3c3968b105c548edada\" returns successfully" Oct 27 07:56:07.491950 containerd[1582]: time="2025-10-27T07:56:07.491815883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:07.492936 containerd[1582]: time="2025-10-27T07:56:07.492882635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:07.493001 containerd[1582]: time="2025-10-27T07:56:07.492942354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:07.493145 kubelet[2709]: E1027 07:56:07.493109 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:07.493204 kubelet[2709]: E1027 07:56:07.493154 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:07.493506 kubelet[2709]: E1027 07:56:07.493368 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68b45bdbf4-4zxr4_calico-apiserver(d6332fd8-67a0-4328-a949-abb03ff66ef6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:07.493506 kubelet[2709]: E1027 07:56:07.493468 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6" Oct 27 07:56:07.493604 containerd[1582]: time="2025-10-27T07:56:07.493475830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:07.635550 systemd-networkd[1479]: cali57f3f25679c: Gained IPv6LL Oct 27 07:56:07.698988 containerd[1582]: time="2025-10-27T07:56:07.698948978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:07.736300 containerd[1582]: time="2025-10-27T07:56:07.736237333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sfpcs,Uid:656cce1b-d114-4468-aa23-f4cc0ed0fc43,Namespace:calico-system,Attempt:0,}" Oct 27 07:56:07.736300 containerd[1582]: time="2025-10-27T07:56:07.736310932Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:07.736537 containerd[1582]: time="2025-10-27T07:56:07.736355852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:07.736561 kubelet[2709]: E1027 07:56:07.736514 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:07.737479 kubelet[2709]: E1027 07:56:07.736576 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:07.737479 kubelet[2709]: E1027 07:56:07.736731 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-557595484f-vw9q8_calico-apiserver(9fb2f423-4788-4c5c-9c0e-a84c0b4825df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:07.737479 kubelet[2709]: E1027 07:56:07.736774 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:56:07.737592 containerd[1582]: time="2025-10-27T07:56:07.736902328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 07:56:07.846957 kubelet[2709]: E1027 07:56:07.846599 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:07.848680 systemd-networkd[1479]: cali29edca7b793: Link UP Oct 27 07:56:07.849325 systemd-networkd[1479]: cali29edca7b793: Gained carrier Oct 27 07:56:07.864315 kubelet[2709]: I1027 07:56:07.864229 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wltmj" podStartSLOduration=37.863986036 podStartE2EDuration="37.863986036s" podCreationTimestamp="2025-10-27 07:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 07:56:07.862566646 +0000 UTC m=+43.310686715" watchObservedRunningTime="2025-10-27 07:56:07.863986036 +0000 UTC m=+43.312106105" Oct 27 07:56:07.865986 containerd[1582]: 
2025-10-27 07:56:07.759 [INFO][4807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.774 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--sfpcs-eth0 goldmane-7c778bb748- calico-system 656cce1b-d114-4468-aa23-f4cc0ed0fc43 892 0 2025-10-27 07:55:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-sfpcs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali29edca7b793 [] [] }} ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.774 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.795 [INFO][4821] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" HandleID="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Workload="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.795 [INFO][4821] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" HandleID="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Workload="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-sfpcs", "timestamp":"2025-10-27 07:56:07.795805397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.796 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.796 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.796 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.805 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.815 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.820 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.824 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.827 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.827 [INFO][4821] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.829 [INFO][4821] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8 Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.833 [INFO][4821] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.840 [INFO][4821] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.840 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" host="localhost" Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.840 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:07.865986 containerd[1582]: 2025-10-27 07:56:07.840 [INFO][4821] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" HandleID="k8s-pod-network.d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Workload="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.843 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--sfpcs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"656cce1b-d114-4468-aa23-f4cc0ed0fc43", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-sfpcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29edca7b793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.843 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.844 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29edca7b793 ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.850 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.851 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--sfpcs-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"656cce1b-d114-4468-aa23-f4cc0ed0fc43", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8", Pod:"goldmane-7c778bb748-sfpcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29edca7b793", MAC:"82:68:f7:d4:70:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:07.867675 containerd[1582]: 2025-10-27 07:56:07.861 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" Namespace="calico-system" Pod="goldmane-7c778bb748-sfpcs" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--sfpcs-eth0" Oct 27 07:56:07.875819 kubelet[2709]: E1027 07:56:07.875779 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:56:07.880416 kubelet[2709]: E1027 07:56:07.880298 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:07.880617 kubelet[2709]: E1027 07:56:07.880590 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6" Oct 27 07:56:07.883102 kubelet[2709]: E1027 07:56:07.880579 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:56:07.906615 containerd[1582]: time="2025-10-27T07:56:07.906500870Z" level=info msg="connecting to shim d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8" address="unix:///run/containerd/s/6451e25444efe5037aa1c64234f60f4ae3e9c535de214cd01e85513e4cb7e3fa" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:07.941555 systemd[1]: Started cri-containerd-d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8.scope - libcontainer container d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8. Oct 27 07:56:07.953851 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:07.956513 containerd[1582]: time="2025-10-27T07:56:07.956479448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:07.957211 containerd[1582]: time="2025-10-27T07:56:07.957169403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 07:56:07.957280 containerd[1582]: time="2025-10-27T07:56:07.957247762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 07:56:07.957466 kubelet[2709]: E1027 07:56:07.957433 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 07:56:07.957515 kubelet[2709]: E1027 07:56:07.957476 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 07:56:07.957567 kubelet[2709]: E1027 07:56:07.957547 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-78d9d46969-qr569_calico-system(c70e5e6e-faf5-4f92-89ce-19004e63b56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:07.957664 kubelet[2709]: E1027 07:56:07.957581 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" podUID="c70e5e6e-faf5-4f92-89ce-19004e63b56f" Oct 27 07:56:07.974918 containerd[1582]: time="2025-10-27T07:56:07.974879667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sfpcs,Uid:656cce1b-d114-4468-aa23-f4cc0ed0fc43,Namespace:calico-system,Attempt:0,} returns sandbox id \"d99b65d27dc4f639917b3c35ea73f75d71468d5463448f22d0302a9ec6630aa8\"" Oct 27 07:56:07.976499 containerd[1582]: time="2025-10-27T07:56:07.976468535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 07:56:08.196322 containerd[1582]: time="2025-10-27T07:56:08.196164414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:08.197678 containerd[1582]: time="2025-10-27T07:56:08.197605883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 07:56:08.197773 containerd[1582]: time="2025-10-27T07:56:08.197693562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:08.197917 kubelet[2709]: E1027 07:56:08.197874 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 07:56:08.197955 kubelet[2709]: E1027 07:56:08.197925 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 07:56:08.198061 kubelet[2709]: E1027 07:56:08.198014 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sfpcs_calico-system(656cce1b-d114-4468-aa23-f4cc0ed0fc43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:08.198061 kubelet[2709]: E1027 07:56:08.198049 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:56:08.531494 systemd-networkd[1479]: calid0237067fdf: Gained IPv6LL Oct 27 07:56:08.595511 systemd-networkd[1479]: cali61abd725e8a: Gained 
IPv6LL Oct 27 07:56:08.787506 systemd-networkd[1479]: calib202455667c: Gained IPv6LL Oct 27 07:56:08.851471 systemd-networkd[1479]: cali158f6db6490: Gained IPv6LL Oct 27 07:56:08.882646 kubelet[2709]: E1027 07:56:08.882553 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:08.883519 kubelet[2709]: E1027 07:56:08.883009 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:56:08.883519 kubelet[2709]: E1027 07:56:08.883156 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6" Oct 27 07:56:08.883519 kubelet[2709]: E1027 07:56:08.883614 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" podUID="c70e5e6e-faf5-4f92-89ce-19004e63b56f" Oct 27 07:56:08.885838 kubelet[2709]: E1027 07:56:08.885544 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:56:09.043492 systemd-networkd[1479]: cali29edca7b793: Gained IPv6LL Oct 27 07:56:09.885084 kubelet[2709]: E1027 07:56:09.885024 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:56:09.885448 kubelet[2709]: E1027 07:56:09.885080 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:10.668747 containerd[1582]: time="2025-10-27T07:56:10.668645833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vvvz,Uid:cea2b2ed-e433-4f33-b71d-afa53cd98b5f,Namespace:calico-system,Attempt:0,}" Oct 27 07:56:10.824112 systemd-networkd[1479]: cali8f92c424a3b: Link UP Oct 27 07:56:10.824394 systemd-networkd[1479]: cali8f92c424a3b: Gained carrier Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.721 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.738 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7vvvz-eth0 csi-node-driver- calico-system cea2b2ed-e433-4f33-b71d-afa53cd98b5f 797 0 2025-10-27 07:55:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7vvvz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f92c424a3b [] [] }} ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.738 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.777 [INFO][4981] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" HandleID="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Workload="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.778 [INFO][4981] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" HandleID="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Workload="localhost-k8s-csi--node--driver--7vvvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7vvvz", "timestamp":"2025-10-27 07:56:10.777678461 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.778 [INFO][4981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.778 [INFO][4981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.778 [INFO][4981] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.787 [INFO][4981] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.794 [INFO][4981] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.801 [INFO][4981] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.803 [INFO][4981] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.805 [INFO][4981] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.805 [INFO][4981] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.807 [INFO][4981] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24 Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.812 [INFO][4981] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.819 [INFO][4981] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.819 [INFO][4981] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" host="localhost" Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.819 [INFO][4981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 07:56:10.837244 containerd[1582]: 2025-10-27 07:56:10.819 [INFO][4981] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" HandleID="k8s-pod-network.f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Workload="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.821 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7vvvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea2b2ed-e433-4f33-b71d-afa53cd98b5f", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7vvvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f92c424a3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.821 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.822 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f92c424a3b ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.823 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.824 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7vvvz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea2b2ed-e433-4f33-b71d-afa53cd98b5f", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 7, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24", Pod:"csi-node-driver-7vvvz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f92c424a3b", MAC:"52:ae:d5:63:c7:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 07:56:10.837927 containerd[1582]: 2025-10-27 07:56:10.833 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" Namespace="calico-system" Pod="csi-node-driver-7vvvz" WorkloadEndpoint="localhost-k8s-csi--node--driver--7vvvz-eth0" Oct 27 07:56:10.857946 containerd[1582]: time="2025-10-27T07:56:10.857620175Z" level=info msg="connecting to shim f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24" address="unix:///run/containerd/s/e0763914281694ab6efa4f4463403436c835502db0f9ab8ea04f4ead447de601" namespace=k8s.io protocol=ttrpc version=3 Oct 27 07:56:10.880513 systemd[1]: Started cri-containerd-f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24.scope - libcontainer container f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24. 
Oct 27 07:56:10.889938 systemd-resolved[1273]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 07:56:10.901855 containerd[1582]: time="2025-10-27T07:56:10.901817622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7vvvz,Uid:cea2b2ed-e433-4f33-b71d-afa53cd98b5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7598d4f78f367be9df90cd3cfee61a081aa18067aee33daf55bee6b4ec2ca24\"" Oct 27 07:56:10.903533 containerd[1582]: time="2025-10-27T07:56:10.903504650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 07:56:11.102448 containerd[1582]: time="2025-10-27T07:56:11.102393059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:11.103349 containerd[1582]: time="2025-10-27T07:56:11.103278053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 07:56:11.103973 containerd[1582]: time="2025-10-27T07:56:11.103372132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 07:56:11.104031 kubelet[2709]: E1027 07:56:11.103556 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 07:56:11.104031 kubelet[2709]: E1027 07:56:11.103606 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 07:56:11.104031 kubelet[2709]: E1027 07:56:11.103683 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:11.104522 containerd[1582]: time="2025-10-27T07:56:11.104489724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 07:56:11.326734 containerd[1582]: time="2025-10-27T07:56:11.326583309Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:11.328152 containerd[1582]: time="2025-10-27T07:56:11.328041259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 07:56:11.328152 containerd[1582]: time="2025-10-27T07:56:11.328108418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" 
Oct 27 07:56:11.328382 kubelet[2709]: E1027 07:56:11.328343 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 07:56:11.328430 kubelet[2709]: E1027 07:56:11.328393 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 07:56:11.328543 kubelet[2709]: E1027 07:56:11.328496 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:11.328631 kubelet[2709]: E1027 07:56:11.328570 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:56:11.890442 kubelet[2709]: E1027 07:56:11.890393 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:56:12.306541 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:34708.service - OpenSSH per-connection server daemon 
(10.0.0.1:34708). Oct 27 07:56:12.372372 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 34708 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:12.377252 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:12.383040 systemd-logind[1548]: New session 10 of user core. Oct 27 07:56:12.389523 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 07:56:12.572213 sshd[5093]: Connection closed by 10.0.0.1 port 34708 Oct 27 07:56:12.573154 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:12.580711 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:34708.service: Deactivated successfully. Oct 27 07:56:12.582626 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 07:56:12.584530 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Oct 27 07:56:12.586886 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:34714.service - OpenSSH per-connection server daemon (10.0.0.1:34714). Oct 27 07:56:12.588532 systemd-logind[1548]: Removed session 10. Oct 27 07:56:12.627461 systemd-networkd[1479]: cali8f92c424a3b: Gained IPv6LL Oct 27 07:56:12.649775 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 34714 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:12.651166 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:12.656277 systemd-logind[1548]: New session 11 of user core. Oct 27 07:56:12.661476 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 07:56:12.819130 sshd[5111]: Connection closed by 10.0.0.1 port 34714 Oct 27 07:56:12.820742 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:12.830186 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:34714.service: Deactivated successfully. Oct 27 07:56:12.832430 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 07:56:12.833554 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Oct 27 07:56:12.839662 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:34730.service - OpenSSH per-connection server daemon (10.0.0.1:34730). Oct 27 07:56:12.841857 systemd-logind[1548]: Removed session 11. 
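The pull failures above for ghcr.io/flatcar/calico/csi:v3.30.4 and ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4 all stem from a 404 returned by ghcr.io. One way to confirm the tag is genuinely absent, independent of containerd, is to query the OCI distribution API directly. A sketch in Go, assuming ghcr.io's anonymous pull-token endpoint behaves as commonly documented (the token URL and media type here are assumptions, only the repository names and tag come from the log):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // checkTag asks ghcr.io whether a manifest exists for repo:tag via the
    // OCI distribution API, using an anonymous pull token.
    func checkTag(repo, tag string) (int, error) {
    	resp, err := http.Get(fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo))
    	if err != nil {
    		return 0, err
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		return 0, err
    	}

    	req, err := http.NewRequest(http.MethodHead,
    		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
    	if err != nil {
    		return 0, err
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		return 0, err
    	}
    	res.Body.Close()
    	return res.StatusCode, nil
    }

    func main() {
    	// Repositories and tag taken from the pull errors in the log.
    	for _, repo := range []string{"flatcar/calico/csi", "flatcar/calico/node-driver-registrar"} {
    		code, err := checkTag(repo, "v3.30.4")
    		if err != nil {
    			fmt.Println(repo, "error:", err)
    			continue
    		}
    		fmt.Println(repo, "->", code) // a 404 here would match containerd's "not found"
    	}
    }

A 404 status from the manifest HEAD request would correspond to containerd's "fetch failed after status: 404 Not Found" lines.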
Oct 27 07:56:12.892447 kubelet[2709]: E1027 07:56:12.892400 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:56:12.910921 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 34730 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:12.912096 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:12.915850 systemd-logind[1548]: New session 12 of user core. Oct 27 07:56:12.922477 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 07:56:12.984915 kubelet[2709]: I1027 07:56:12.984875 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 07:56:12.985295 kubelet[2709]: E1027 07:56:12.985279 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:13.059716 sshd[5126]: Connection closed by 10.0.0.1 port 34730 Oct 27 07:56:13.060029 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:13.065032 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:34730.service: Deactivated successfully. Oct 27 07:56:13.068948 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 07:56:13.069859 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Oct 27 07:56:13.071474 systemd-logind[1548]: Removed session 12. 
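The kubelet's "Nameserver limits exceeded" entries indicate that the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line keeps only 1.1.1.1, 1.0.0.1 and 8.8.8.8. A small sketch of that truncation behaviour, assuming the commonly documented cap of three nameservers (this mirrors the effect described in the message, it is not the kubelet's own code):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    const maxNameservers = 3 // assumed cap, matching the three servers kept in the log

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}

    	if len(servers) > maxNameservers {
    		fmt.Printf("omitting %d nameserver(s), applying %v\n",
    			len(servers)-maxNameservers, servers[:maxNameservers])
    	} else {
    		fmt.Println("nameservers within limit:", servers)
    	}
    }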
Oct 27 07:56:13.103410 containerd[1582]: time="2025-10-27T07:56:13.103374972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7\" id:\"7631b06e2b667aa4fa15795ae77c52ad9cbc4ec3e09446c9b432edef19db3d4b\" pid:5148 exit_status:1 exited_at:{seconds:1761551773 nanos:103086974}" Oct 27 07:56:13.171400 containerd[1582]: time="2025-10-27T07:56:13.171359923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b9a1e300edaf299ec5352763f8a31e4f1cb8cd97f21ba61bc67e69c766468b7\" id:\"6812d264aafc697ba09ccce2466fbdaa269a7f3dc7060d4e8c3c8b1bf13da266\" pid:5175 exit_status:1 exited_at:{seconds:1761551773 nanos:170863526}" Oct 27 07:56:13.516149 kubelet[2709]: I1027 07:56:13.515436 2709 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 07:56:13.516798 kubelet[2709]: E1027 07:56:13.516760 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:13.893872 kubelet[2709]: E1027 07:56:13.893845 2709 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 07:56:14.120816 systemd-networkd[1479]: vxlan.calico: Link UP Oct 27 07:56:14.120825 systemd-networkd[1479]: vxlan.calico: Gained carrier Oct 27 07:56:15.667110 containerd[1582]: time="2025-10-27T07:56:15.667032380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 07:56:15.827475 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Oct 27 07:56:15.896460 containerd[1582]: time="2025-10-27T07:56:15.896410405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:15.903088 containerd[1582]: time="2025-10-27T07:56:15.903040123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 07:56:15.903182 containerd[1582]: time="2025-10-27T07:56:15.903120162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 07:56:15.903305 kubelet[2709]: E1027 07:56:15.903265 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 07:56:15.903586 kubelet[2709]: E1027 07:56:15.903312 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 07:56:15.903586 kubelet[2709]: E1027 07:56:15.903416 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9dc749786-p5zz4_calico-system(3942c5e9-567d-4de9-af49-f62592fa9e2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:15.904387 containerd[1582]: time="2025-10-27T07:56:15.904319994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 07:56:16.112328 containerd[1582]: time="2025-10-27T07:56:16.112262329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:16.122501 containerd[1582]: time="2025-10-27T07:56:16.122435825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 07:56:16.122643 containerd[1582]: time="2025-10-27T07:56:16.122559265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 07:56:16.122744 kubelet[2709]: E1027 07:56:16.122711 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 07:56:16.122820 kubelet[2709]: E1027 07:56:16.122751 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 07:56:16.122935 kubelet[2709]: E1027 07:56:16.122829 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9dc749786-p5zz4_calico-system(3942c5e9-567d-4de9-af49-f62592fa9e2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:16.122935 kubelet[2709]: E1027 07:56:16.122864 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc749786-p5zz4" podUID="3942c5e9-567d-4de9-af49-f62592fa9e2d" Oct 27 07:56:18.079807 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:34732.service - OpenSSH per-connection server daemon (10.0.0.1:34732). 
Oct 27 07:56:18.150450 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 34732 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:18.151753 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:18.157090 systemd-logind[1548]: New session 13 of user core. Oct 27 07:56:18.166509 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 07:56:18.303703 sshd[5358]: Connection closed by 10.0.0.1 port 34732 Oct 27 07:56:18.304413 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:18.313796 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:34732.service: Deactivated successfully. Oct 27 07:56:18.316481 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 07:56:18.318074 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Oct 27 07:56:18.320132 systemd-logind[1548]: Removed session 13. Oct 27 07:56:18.321950 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:34746.service - OpenSSH per-connection server daemon (10.0.0.1:34746). Oct 27 07:56:18.386812 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 34746 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:18.388109 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:18.392648 systemd-logind[1548]: New session 14 of user core. Oct 27 07:56:18.406510 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 07:56:18.606252 sshd[5374]: Connection closed by 10.0.0.1 port 34746 Oct 27 07:56:18.607000 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:18.618775 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:34746.service: Deactivated successfully. Oct 27 07:56:18.621277 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 07:56:18.623550 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Oct 27 07:56:18.624680 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:34762.service - OpenSSH per-connection server daemon (10.0.0.1:34762). Oct 27 07:56:18.626196 systemd-logind[1548]: Removed session 14. Oct 27 07:56:18.682692 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 34762 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:18.684093 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:18.688390 systemd-logind[1548]: New session 15 of user core. Oct 27 07:56:18.698544 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 07:56:19.323351 sshd[5389]: Connection closed by 10.0.0.1 port 34762 Oct 27 07:56:19.324075 sshd-session[5386]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:19.333072 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:34762.service: Deactivated successfully. Oct 27 07:56:19.336130 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 07:56:19.337388 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Oct 27 07:56:19.342803 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:45892.service - OpenSSH per-connection server daemon (10.0.0.1:45892). Oct 27 07:56:19.343557 systemd-logind[1548]: Removed session 15. 
Oct 27 07:56:19.402931 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 45892 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:19.404128 sshd-session[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:19.409384 systemd-logind[1548]: New session 16 of user core. Oct 27 07:56:19.416509 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 07:56:19.669238 containerd[1582]: time="2025-10-27T07:56:19.669125299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:19.674680 sshd[5419]: Connection closed by 10.0.0.1 port 45892 Oct 27 07:56:19.673379 sshd-session[5412]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:19.683199 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:45892.service: Deactivated successfully. Oct 27 07:56:19.684859 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 07:56:19.686810 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Oct 27 07:56:19.690027 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:45908.service - OpenSSH per-connection server daemon (10.0.0.1:45908). Oct 27 07:56:19.692028 systemd-logind[1548]: Removed session 16. Oct 27 07:56:19.755321 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 45908 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:19.756954 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:19.760973 systemd-logind[1548]: New session 17 of user core. Oct 27 07:56:19.773520 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 07:56:19.872668 containerd[1582]: time="2025-10-27T07:56:19.872626977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:19.873478 containerd[1582]: time="2025-10-27T07:56:19.873448893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:19.873542 containerd[1582]: time="2025-10-27T07:56:19.873511252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:19.873696 kubelet[2709]: E1027 07:56:19.873657 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:19.874510 kubelet[2709]: E1027 07:56:19.873709 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:19.874510 kubelet[2709]: E1027 07:56:19.873785 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68b45bdbf4-hbjcq_calico-apiserver(8197235e-fb1d-4c19-95db-1c579409d474): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:19.874510 kubelet[2709]: E1027 07:56:19.873818 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:56:19.888323 sshd[5433]: Connection closed by 10.0.0.1 port 45908 Oct 27 07:56:19.888680 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:19.892303 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:45908.service: Deactivated successfully. Oct 27 07:56:19.894027 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 07:56:19.894765 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Oct 27 07:56:19.895731 systemd-logind[1548]: Removed session 17. Oct 27 07:56:21.666879 containerd[1582]: time="2025-10-27T07:56:21.666835278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 07:56:21.889527 containerd[1582]: time="2025-10-27T07:56:21.889476483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:21.890507 containerd[1582]: time="2025-10-27T07:56:21.890472198Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 07:56:21.890587 containerd[1582]: time="2025-10-27T07:56:21.890543917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:21.890800 kubelet[2709]: E1027 07:56:21.890734 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 07:56:21.890800 kubelet[2709]: E1027 07:56:21.890799 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 07:56:21.891114 kubelet[2709]: E1027 07:56:21.890968 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sfpcs_calico-system(656cce1b-d114-4468-aa23-f4cc0ed0fc43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:21.891114 kubelet[2709]: E1027 07:56:21.891009 2709 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:56:21.891307 containerd[1582]: time="2025-10-27T07:56:21.891283713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 07:56:22.114756 containerd[1582]: time="2025-10-27T07:56:22.114704124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:22.115670 containerd[1582]: time="2025-10-27T07:56:22.115633478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 07:56:22.115722 containerd[1582]: time="2025-10-27T07:56:22.115694358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 07:56:22.115913 kubelet[2709]: E1027 07:56:22.115863 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 07:56:22.115971 kubelet[2709]: E1027 07:56:22.115917 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 07:56:22.116023 kubelet[2709]: E1027 07:56:22.115998 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-78d9d46969-qr569_calico-system(c70e5e6e-faf5-4f92-89ce-19004e63b56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:22.116130 kubelet[2709]: E1027 07:56:22.116033 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" podUID="c70e5e6e-faf5-4f92-89ce-19004e63b56f" Oct 27 07:56:22.670637 containerd[1582]: time="2025-10-27T07:56:22.670593148Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:22.912050 containerd[1582]: time="2025-10-27T07:56:22.911942786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:22.912991 containerd[1582]: time="2025-10-27T07:56:22.912951260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:22.913051 containerd[1582]: time="2025-10-27T07:56:22.913018020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 07:56:22.913229 kubelet[2709]: E1027 07:56:22.913173 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:22.913669 kubelet[2709]: E1027 07:56:22.913241 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:22.913669 kubelet[2709]: E1027 07:56:22.913522 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68b45bdbf4-4zxr4_calico-apiserver(d6332fd8-67a0-4328-a949-abb03ff66ef6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:22.913669 kubelet[2709]: E1027 07:56:22.913560 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6" Oct 27 07:56:22.914103 containerd[1582]: time="2025-10-27T07:56:22.913910855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 07:56:23.121075 containerd[1582]: time="2025-10-27T07:56:23.121020376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:23.122089 containerd[1582]: time="2025-10-27T07:56:23.122050490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 07:56:23.122187 containerd[1582]: time="2025-10-27T07:56:23.122148170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Oct 27 07:56:23.122365 kubelet[2709]: E1027 07:56:23.122309 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:23.122413 kubelet[2709]: E1027 07:56:23.122379 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 07:56:23.122475 kubelet[2709]: E1027 07:56:23.122457 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-557595484f-vw9q8_calico-apiserver(9fb2f423-4788-4c5c-9c0e-a84c0b4825df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:23.122520 kubelet[2709]: E1027 07:56:23.122491 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:56:24.667548 containerd[1582]: time="2025-10-27T07:56:24.667511698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 07:56:24.879724 containerd[1582]: time="2025-10-27T07:56:24.879673373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:24.880480 containerd[1582]: time="2025-10-27T07:56:24.880442249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 07:56:24.880556 containerd[1582]: time="2025-10-27T07:56:24.880535928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 07:56:24.880724 kubelet[2709]: E1027 07:56:24.880683 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 07:56:24.881197 kubelet[2709]: E1027 07:56:24.880736 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 
07:56:24.881197 kubelet[2709]: E1027 07:56:24.880824 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:24.881665 containerd[1582]: time="2025-10-27T07:56:24.881643642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 07:56:24.909146 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:45918.service - OpenSSH per-connection server daemon (10.0.0.1:45918). Oct 27 07:56:24.974004 sshd[5458]: Accepted publickey for core from 10.0.0.1 port 45918 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:24.975363 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:24.979918 systemd-logind[1548]: New session 18 of user core. Oct 27 07:56:24.985498 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 07:56:25.085192 containerd[1582]: time="2025-10-27T07:56:25.085076610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 07:56:25.087233 containerd[1582]: time="2025-10-27T07:56:25.087180719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 07:56:25.087294 containerd[1582]: time="2025-10-27T07:56:25.087277438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 07:56:25.087503 kubelet[2709]: E1027 07:56:25.087438 2709 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 07:56:25.087569 kubelet[2709]: E1027 07:56:25.087503 2709 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 07:56:25.087603 kubelet[2709]: E1027 07:56:25.087582 2709 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7vvvz_calico-system(cea2b2ed-e433-4f33-b71d-afa53cd98b5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 07:56:25.087658 kubelet[2709]: E1027 
07:56:25.087621 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:56:25.100573 sshd[5461]: Connection closed by 10.0.0.1 port 45918 Oct 27 07:56:25.099394 sshd-session[5458]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:25.103079 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:45918.service: Deactivated successfully. Oct 27 07:56:25.104801 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 07:56:25.105565 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Oct 27 07:56:25.106393 systemd-logind[1548]: Removed session 18. Oct 27 07:56:27.667575 kubelet[2709]: E1027 07:56:27.667363 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9dc749786-p5zz4" podUID="3942c5e9-567d-4de9-af49-f62592fa9e2d" Oct 27 07:56:30.115579 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:43190.service - OpenSSH per-connection server daemon (10.0.0.1:43190). Oct 27 07:56:30.162678 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 43190 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:30.163926 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:30.170401 systemd-logind[1548]: New session 19 of user core. Oct 27 07:56:30.178561 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 07:56:30.292786 sshd[5479]: Connection closed by 10.0.0.1 port 43190 Oct 27 07:56:30.293277 sshd-session[5476]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:30.297067 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:43190.service: Deactivated successfully. Oct 27 07:56:30.300474 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 07:56:30.301320 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Oct 27 07:56:30.302596 systemd-logind[1548]: Removed session 19. 
Oct 27 07:56:32.667781 kubelet[2709]: E1027 07:56:32.667660 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-hbjcq" podUID="8197235e-fb1d-4c19-95db-1c579409d474" Oct 27 07:56:33.667213 kubelet[2709]: E1027 07:56:33.667100 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-557595484f-vw9q8" podUID="9fb2f423-4788-4c5c-9c0e-a84c0b4825df" Oct 27 07:56:33.667213 kubelet[2709]: E1027 07:56:33.667172 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d9d46969-qr569" podUID="c70e5e6e-faf5-4f92-89ce-19004e63b56f" Oct 27 07:56:35.306210 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:43198.service - OpenSSH per-connection server daemon (10.0.0.1:43198). Oct 27 07:56:35.353845 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 43198 ssh2: RSA SHA256:If8NnUZhKRY4ikQPnKzeC36xYZxE244DqvwVTFk9H74 Oct 27 07:56:35.355961 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 07:56:35.363299 systemd-logind[1548]: New session 20 of user core. Oct 27 07:56:35.368597 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 07:56:35.502376 sshd[5508]: Connection closed by 10.0.0.1 port 43198 Oct 27 07:56:35.502196 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Oct 27 07:56:35.506302 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:43198.service: Deactivated successfully. Oct 27 07:56:35.508056 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 07:56:35.508959 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Oct 27 07:56:35.510236 systemd-logind[1548]: Removed session 20. 
Oct 27 07:56:35.666769 kubelet[2709]: E1027 07:56:35.666500 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sfpcs" podUID="656cce1b-d114-4468-aa23-f4cc0ed0fc43" Oct 27 07:56:36.670439 kubelet[2709]: E1027 07:56:36.670389 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7vvvz" podUID="cea2b2ed-e433-4f33-b71d-afa53cd98b5f" Oct 27 07:56:36.671030 kubelet[2709]: E1027 07:56:36.670838 2709 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b45bdbf4-4zxr4" podUID="d6332fd8-67a0-4328-a949-abb03ff66ef6"
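The later entries show the kubelet moving these containers from ErrImagePull to ImagePullBackOff and retrying periodically (07:56:27, 07:56:32 through 07:56:36 above). A sketch of the doubling back-off shape behind those retries, assuming the commonly cited kubelet defaults of a 10s initial delay capped at 5m (the kubelet uses its own flow-control back-off helper; this is only the arithmetic):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed defaults: 10s initial back-off, doubling per failure, capped at 5m.
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	elapsed := time.Duration(0)
    	for i := 1; i <= 8; i++ {
    		fmt.Printf("retry %d after %v (elapsed %v)\n", i, delay, elapsed)
    		elapsed += delay
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Under these assumptions the retry interval reaches the 5m cap after roughly five failures, which is consistent with the pull attempts in the log thinning out over time while the pods stay in ImagePullBackOff.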