Oct 28 23:21:56.393637 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 28 23:21:56.393661 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Oct 28 21:26:42 -00 2025 Oct 28 23:21:56.393670 kernel: KASLR enabled Oct 28 23:21:56.393676 kernel: efi: EFI v2.7 by EDK II Oct 28 23:21:56.393682 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Oct 28 23:21:56.393687 kernel: random: crng init done Oct 28 23:21:56.393695 kernel: secureboot: Secure boot disabled Oct 28 23:21:56.393701 kernel: ACPI: Early table checksum verification disabled Oct 28 23:21:56.393708 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Oct 28 23:21:56.393714 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 28 23:21:56.393720 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393726 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393732 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393739 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393748 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393754 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393761 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393767 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393774 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 28 23:21:56.393780 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 28 23:21:56.393786 kernel: ACPI: Use ACPI SPCR as default console: No Oct 28 23:21:56.393793 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 28 23:21:56.393800 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Oct 28 23:21:56.393807 kernel: Zone ranges: Oct 28 23:21:56.393813 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 28 23:21:56.393820 kernel: DMA32 empty Oct 28 23:21:56.393826 kernel: Normal empty Oct 28 23:21:56.393832 kernel: Device empty Oct 28 23:21:56.393838 kernel: Movable zone start for each node Oct 28 23:21:56.393845 kernel: Early memory node ranges Oct 28 23:21:56.393851 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Oct 28 23:21:56.393857 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Oct 28 23:21:56.393864 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Oct 28 23:21:56.393870 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Oct 28 23:21:56.393878 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Oct 28 23:21:56.393884 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Oct 28 23:21:56.393891 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Oct 28 23:21:56.393897 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Oct 28 23:21:56.393903 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Oct 28 23:21:56.393910 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 28 23:21:56.393921 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 
28 23:21:56.393928 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 28 23:21:56.393934 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 28 23:21:56.393941 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 28 23:21:56.393948 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 28 23:21:56.393955 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Oct 28 23:21:56.393962 kernel: psci: probing for conduit method from ACPI. Oct 28 23:21:56.393968 kernel: psci: PSCIv1.1 detected in firmware. Oct 28 23:21:56.393976 kernel: psci: Using standard PSCI v0.2 function IDs Oct 28 23:21:56.393983 kernel: psci: Trusted OS migration not required Oct 28 23:21:56.393990 kernel: psci: SMC Calling Convention v1.1 Oct 28 23:21:56.393997 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 28 23:21:56.394004 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Oct 28 23:21:56.394010 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Oct 28 23:21:56.394026 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 28 23:21:56.394033 kernel: Detected PIPT I-cache on CPU0 Oct 28 23:21:56.394040 kernel: CPU features: detected: GIC system register CPU interface Oct 28 23:21:56.394047 kernel: CPU features: detected: Spectre-v4 Oct 28 23:21:56.394053 kernel: CPU features: detected: Spectre-BHB Oct 28 23:21:56.394061 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 28 23:21:56.394068 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 28 23:21:56.394075 kernel: CPU features: detected: ARM erratum 1418040 Oct 28 23:21:56.394082 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 28 23:21:56.394089 kernel: alternatives: applying boot alternatives Oct 28 23:21:56.394097 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d4a291c245609e5c237181e704ec1c7ec0a6d72eca92291e03117b7440b9f526 Oct 28 23:21:56.394104 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 28 23:21:56.394111 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 28 23:21:56.394129 kernel: Fallback order for Node 0: 0 Oct 28 23:21:56.394136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Oct 28 23:21:56.394159 kernel: Policy zone: DMA Oct 28 23:21:56.394165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 28 23:21:56.394172 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Oct 28 23:21:56.394179 kernel: software IO TLB: area num 4. Oct 28 23:21:56.394185 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Oct 28 23:21:56.394192 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Oct 28 23:21:56.394199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 28 23:21:56.394206 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 28 23:21:56.394213 kernel: rcu: RCU event tracing is enabled. Oct 28 23:21:56.394220 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 28 23:21:56.394227 kernel: Trampoline variant of Tasks RCU enabled. Oct 28 23:21:56.394236 kernel: Tracing variant of Tasks RCU enabled. 
Oct 28 23:21:56.394243 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 28 23:21:56.394250 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 28 23:21:56.394257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 23:21:56.394264 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 28 23:21:56.394271 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 28 23:21:56.394278 kernel: GICv3: 256 SPIs implemented Oct 28 23:21:56.394284 kernel: GICv3: 0 Extended SPIs implemented Oct 28 23:21:56.394291 kernel: Root IRQ handler: gic_handle_irq Oct 28 23:21:56.394298 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 28 23:21:56.394305 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Oct 28 23:21:56.394313 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 28 23:21:56.394321 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 28 23:21:56.394328 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Oct 28 23:21:56.394335 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Oct 28 23:21:56.394342 kernel: GICv3: using LPI property table @0x0000000040130000 Oct 28 23:21:56.394349 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Oct 28 23:21:56.394356 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 28 23:21:56.394363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 28 23:21:56.394369 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 28 23:21:56.394376 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 28 23:21:56.394383 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 28 23:21:56.394392 kernel: arm-pv: using stolen time PV Oct 28 23:21:56.394399 kernel: Console: colour dummy device 80x25 Oct 28 23:21:56.394407 kernel: ACPI: Core revision 20240827 Oct 28 23:21:56.394414 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 28 23:21:56.394421 kernel: pid_max: default: 32768 minimum: 301 Oct 28 23:21:56.394428 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 28 23:21:56.394435 kernel: landlock: Up and running. Oct 28 23:21:56.394442 kernel: SELinux: Initializing. Oct 28 23:21:56.394451 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 23:21:56.394458 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 28 23:21:56.394465 kernel: rcu: Hierarchical SRCU implementation. Oct 28 23:21:56.394473 kernel: rcu: Max phase no-delay instances is 400. Oct 28 23:21:56.394480 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 28 23:21:56.394487 kernel: Remapping and enabling EFI services. Oct 28 23:21:56.394494 kernel: smp: Bringing up secondary CPUs ... 
Oct 28 23:21:56.394503 kernel: Detected PIPT I-cache on CPU1 Oct 28 23:21:56.394514 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 28 23:21:56.394523 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Oct 28 23:21:56.394531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 28 23:21:56.394539 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 28 23:21:56.394546 kernel: Detected PIPT I-cache on CPU2 Oct 28 23:21:56.394554 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 28 23:21:56.394562 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Oct 28 23:21:56.394570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 28 23:21:56.394577 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 28 23:21:56.394585 kernel: Detected PIPT I-cache on CPU3 Oct 28 23:21:56.394593 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 28 23:21:56.394600 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Oct 28 23:21:56.394608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 28 23:21:56.394616 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 28 23:21:56.394624 kernel: smp: Brought up 1 node, 4 CPUs Oct 28 23:21:56.394632 kernel: SMP: Total of 4 processors activated. Oct 28 23:21:56.394639 kernel: CPU: All CPU(s) started at EL1 Oct 28 23:21:56.394646 kernel: CPU features: detected: 32-bit EL0 Support Oct 28 23:21:56.394654 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 28 23:21:56.394662 kernel: CPU features: detected: Common not Private translations Oct 28 23:21:56.394671 kernel: CPU features: detected: CRC32 instructions Oct 28 23:21:56.394678 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 28 23:21:56.394686 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 28 23:21:56.394693 kernel: CPU features: detected: LSE atomic instructions Oct 28 23:21:56.394701 kernel: CPU features: detected: Privileged Access Never Oct 28 23:21:56.394708 kernel: CPU features: detected: RAS Extension Support Oct 28 23:21:56.394716 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 28 23:21:56.394723 kernel: alternatives: applying system-wide alternatives Oct 28 23:21:56.394732 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Oct 28 23:21:56.394740 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Oct 28 23:21:56.394747 kernel: devtmpfs: initialized Oct 28 23:21:56.394755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 28 23:21:56.394763 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 28 23:21:56.394770 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 28 23:21:56.394778 kernel: 0 pages in range for non-PLT usage Oct 28 23:21:56.394786 kernel: 515056 pages in range for PLT usage Oct 28 23:21:56.394794 kernel: pinctrl core: initialized pinctrl subsystem Oct 28 23:21:56.394801 kernel: SMBIOS 3.0.0 present. 
Oct 28 23:21:56.394809 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 28 23:21:56.394817 kernel: DMI: Memory slots populated: 1/1 Oct 28 23:21:56.394824 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 28 23:21:56.394832 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 28 23:21:56.394841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 28 23:21:56.394849 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 28 23:21:56.394856 kernel: audit: initializing netlink subsys (disabled) Oct 28 23:21:56.394864 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Oct 28 23:21:56.394871 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 28 23:21:56.394879 kernel: cpuidle: using governor menu Oct 28 23:21:56.394886 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 28 23:21:56.394895 kernel: ASID allocator initialised with 32768 entries Oct 28 23:21:56.394903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 28 23:21:56.394910 kernel: Serial: AMBA PL011 UART driver Oct 28 23:21:56.394925 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 28 23:21:56.394933 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 28 23:21:56.394940 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 28 23:21:56.394948 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 28 23:21:56.394955 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 28 23:21:56.394965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 28 23:21:56.394973 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 28 23:21:56.394980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 28 23:21:56.394987 kernel: ACPI: Added _OSI(Module Device) Oct 28 23:21:56.394995 kernel: ACPI: Added _OSI(Processor Device) Oct 28 23:21:56.395003 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 28 23:21:56.395010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 28 23:21:56.395025 kernel: ACPI: Interpreter enabled Oct 28 23:21:56.395033 kernel: ACPI: Using GIC for interrupt routing Oct 28 23:21:56.395041 kernel: ACPI: MCFG table detected, 1 entries Oct 28 23:21:56.395049 kernel: ACPI: CPU0 has been hot-added Oct 28 23:21:56.395057 kernel: ACPI: CPU1 has been hot-added Oct 28 23:21:56.395064 kernel: ACPI: CPU2 has been hot-added Oct 28 23:21:56.395072 kernel: ACPI: CPU3 has been hot-added Oct 28 23:21:56.395082 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 28 23:21:56.395090 kernel: printk: legacy console [ttyAMA0] enabled Oct 28 23:21:56.395097 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 28 23:21:56.395283 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 28 23:21:56.395386 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 28 23:21:56.395485 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 28 23:21:56.395581 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 28 23:21:56.395673 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 28 23:21:56.395697 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 28 23:21:56.395705 
kernel: PCI host bridge to bus 0000:00 Oct 28 23:21:56.395826 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 28 23:21:56.395907 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 28 23:21:56.395985 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 28 23:21:56.396068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 28 23:21:56.396182 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Oct 28 23:21:56.396275 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 28 23:21:56.396364 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Oct 28 23:21:56.396448 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Oct 28 23:21:56.396531 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Oct 28 23:21:56.396612 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Oct 28 23:21:56.396693 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Oct 28 23:21:56.396774 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Oct 28 23:21:56.396849 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 28 23:21:56.396924 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 28 23:21:56.396998 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 28 23:21:56.397008 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 28 23:21:56.397024 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 28 23:21:56.397032 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 28 23:21:56.397040 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 28 23:21:56.397048 kernel: iommu: Default domain type: Translated Oct 28 23:21:56.397057 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 28 23:21:56.397065 kernel: efivars: Registered efivars operations Oct 28 23:21:56.397072 kernel: vgaarb: loaded Oct 28 23:21:56.397080 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 28 23:21:56.397088 kernel: VFS: Disk quotas dquot_6.6.0 Oct 28 23:21:56.397095 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 28 23:21:56.397103 kernel: pnp: PnP ACPI init Oct 28 23:21:56.397210 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 28 23:21:56.397221 kernel: pnp: PnP ACPI: found 1 devices Oct 28 23:21:56.397229 kernel: NET: Registered PF_INET protocol family Oct 28 23:21:56.397237 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 28 23:21:56.397245 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 28 23:21:56.397253 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 28 23:21:56.397261 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 28 23:21:56.397271 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 28 23:21:56.397279 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 28 23:21:56.397286 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 23:21:56.397294 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 28 23:21:56.397301 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 28 23:21:56.397309 kernel: PCI: CLS 0 bytes, default 64 Oct 28 23:21:56.397316 
kernel: kvm [1]: HYP mode not available Oct 28 23:21:56.397326 kernel: Initialise system trusted keyrings Oct 28 23:21:56.397333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 28 23:21:56.397341 kernel: Key type asymmetric registered Oct 28 23:21:56.397348 kernel: Asymmetric key parser 'x509' registered Oct 28 23:21:56.397356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 28 23:21:56.397363 kernel: io scheduler mq-deadline registered Oct 28 23:21:56.397371 kernel: io scheduler kyber registered Oct 28 23:21:56.397380 kernel: io scheduler bfq registered Oct 28 23:21:56.397388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 28 23:21:56.397396 kernel: ACPI: button: Power Button [PWRB] Oct 28 23:21:56.397404 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 28 23:21:56.397486 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 28 23:21:56.397497 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 28 23:21:56.397505 kernel: thunder_xcv, ver 1.0 Oct 28 23:21:56.397514 kernel: thunder_bgx, ver 1.0 Oct 28 23:21:56.397521 kernel: nicpf, ver 1.0 Oct 28 23:21:56.397529 kernel: nicvf, ver 1.0 Oct 28 23:21:56.397633 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 28 23:21:56.397713 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-28T23:21:55 UTC (1761693715) Oct 28 23:21:56.397723 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 28 23:21:56.397733 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Oct 28 23:21:56.397741 kernel: watchdog: NMI not fully supported Oct 28 23:21:56.397748 kernel: watchdog: Hard watchdog permanently disabled Oct 28 23:21:56.397756 kernel: NET: Registered PF_INET6 protocol family Oct 28 23:21:56.397763 kernel: Segment Routing with IPv6 Oct 28 23:21:56.397771 kernel: In-situ OAM (IOAM) with IPv6 Oct 28 23:21:56.397778 kernel: NET: Registered PF_PACKET protocol family Oct 28 23:21:56.397786 kernel: Key type dns_resolver registered Oct 28 23:21:56.397795 kernel: registered taskstats version 1 Oct 28 23:21:56.397802 kernel: Loading compiled-in X.509 certificates Oct 28 23:21:56.397810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6fcb7d180c1be2ee10062a730ec189aabf489c70' Oct 28 23:21:56.397817 kernel: Demotion targets for Node 0: null Oct 28 23:21:56.397825 kernel: Key type .fscrypt registered Oct 28 23:21:56.397832 kernel: Key type fscrypt-provisioning registered Oct 28 23:21:56.397840 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 28 23:21:56.397849 kernel: ima: Allocated hash algorithm: sha1 Oct 28 23:21:56.397856 kernel: ima: No architecture policies found Oct 28 23:21:56.397864 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 28 23:21:56.397872 kernel: clk: Disabling unused clocks Oct 28 23:21:56.397879 kernel: PM: genpd: Disabling unused power domains Oct 28 23:21:56.397887 kernel: Freeing unused kernel memory: 12992K Oct 28 23:21:56.397894 kernel: Run /init as init process Oct 28 23:21:56.397903 kernel: with arguments: Oct 28 23:21:56.397910 kernel: /init Oct 28 23:21:56.397918 kernel: with environment: Oct 28 23:21:56.397925 kernel: HOME=/ Oct 28 23:21:56.397933 kernel: TERM=linux Oct 28 23:21:56.398037 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 28 23:21:56.398142 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 28 23:21:56.398156 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 28 23:21:56.398164 kernel: GPT:16515071 != 27000831 Oct 28 23:21:56.398172 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 28 23:21:56.398179 kernel: GPT:16515071 != 27000831 Oct 28 23:21:56.398186 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 28 23:21:56.398194 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 28 23:21:56.398203 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398211 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398219 kernel: SCSI subsystem initialized Oct 28 23:21:56.398227 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 28 23:21:56.398242 kernel: device-mapper: uevent: version 1.0.3 Oct 28 23:21:56.398250 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 28 23:21:56.398259 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Oct 28 23:21:56.398266 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398273 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398280 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398288 kernel: raid6: neonx8 gen() 15767 MB/s Oct 28 23:21:56.398295 kernel: raid6: neonx4 gen() 15817 MB/s Oct 28 23:21:56.398303 kernel: raid6: neonx2 gen() 13212 MB/s Oct 28 23:21:56.398311 kernel: raid6: neonx1 gen() 10432 MB/s Oct 28 23:21:56.398319 kernel: raid6: int64x8 gen() 6912 MB/s Oct 28 23:21:56.398327 kernel: raid6: int64x4 gen() 7353 MB/s Oct 28 23:21:56.398334 kernel: raid6: int64x2 gen() 6102 MB/s Oct 28 23:21:56.398342 kernel: raid6: int64x1 gen() 5043 MB/s Oct 28 23:21:56.398349 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s Oct 28 23:21:56.398357 kernel: raid6: .... 
xor() 12352 MB/s, rmw enabled Oct 28 23:21:56.398364 kernel: raid6: using neon recovery algorithm Oct 28 23:21:56.398374 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398381 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398389 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398396 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398403 kernel: xor: measuring software checksum speed Oct 28 23:21:56.398411 kernel: 8regs : 21630 MB/sec Oct 28 23:21:56.398418 kernel: 32regs : 20902 MB/sec Oct 28 23:21:56.398426 kernel: arm64_neon : 25584 MB/sec Oct 28 23:21:56.398435 kernel: xor: using function: arm64_neon (25584 MB/sec) Oct 28 23:21:56.398442 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398449 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 28 23:21:56.398457 kernel: BTRFS: device fsid a3ab90fd-8914-4fc1-b889-c46e416b99c2 devid 1 transid 43 /dev/mapper/usr (253:0) scanned by mount (203) Oct 28 23:21:56.398465 kernel: BTRFS info (device dm-0): first mount of filesystem a3ab90fd-8914-4fc1-b889-c46e416b99c2 Oct 28 23:21:56.398473 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:21:56.398481 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 28 23:21:56.398490 kernel: BTRFS info (device dm-0): enabling free space tree Oct 28 23:21:56.398498 kernel: Invalid ELF header magic: != \u007fELF Oct 28 23:21:56.398505 kernel: loop: module loaded Oct 28 23:21:56.398512 kernel: loop0: detected capacity change from 0 to 91480 Oct 28 23:21:56.398520 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 28 23:21:56.398529 systemd[1]: Successfully made /usr/ read-only. Oct 28 23:21:56.398540 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 23:21:56.398550 systemd[1]: Detected virtualization kvm. Oct 28 23:21:56.398558 systemd[1]: Detected architecture arm64. Oct 28 23:21:56.398566 systemd[1]: Running in initrd. Oct 28 23:21:56.398574 systemd[1]: No hostname configured, using default hostname. Oct 28 23:21:56.398582 systemd[1]: Hostname set to . Oct 28 23:21:56.398590 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 23:21:56.398599 systemd[1]: Queued start job for default target initrd.target. Oct 28 23:21:56.398607 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 23:21:56.398615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 23:21:56.398623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 23:21:56.398632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 28 23:21:56.398640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 23:21:56.398650 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 28 23:21:56.398664 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Oct 28 23:21:56.398675 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 23:21:56.398684 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 23:21:56.398694 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 28 23:21:56.398702 systemd[1]: Reached target paths.target - Path Units. Oct 28 23:21:56.398710 systemd[1]: Reached target slices.target - Slice Units. Oct 28 23:21:56.398719 systemd[1]: Reached target swap.target - Swaps. Oct 28 23:21:56.398727 systemd[1]: Reached target timers.target - Timer Units. Oct 28 23:21:56.398736 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 23:21:56.398744 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 23:21:56.398754 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 28 23:21:56.398762 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 28 23:21:56.398771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 23:21:56.398779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 23:21:56.398788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 23:21:56.398796 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 23:21:56.398805 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 28 23:21:56.398814 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 28 23:21:56.398823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 23:21:56.398832 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 28 23:21:56.398841 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 28 23:21:56.398850 systemd[1]: Starting systemd-fsck-usr.service... Oct 28 23:21:56.398858 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 23:21:56.398866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 23:21:56.398876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:21:56.398885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 28 23:21:56.398894 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 23:21:56.398903 systemd[1]: Finished systemd-fsck-usr.service. Oct 28 23:21:56.398912 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 23:21:56.398939 systemd-journald[344]: Collecting audit messages is disabled. Oct 28 23:21:56.398962 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 28 23:21:56.398972 systemd-journald[344]: Journal started Oct 28 23:21:56.398990 systemd-journald[344]: Runtime Journal (/run/log/journal/1a0365e7ef4b4f91801b8efaf490781e) is 6M, max 48.5M, 42.4M free. Oct 28 23:21:56.407213 kernel: Bridge firewalling registered Oct 28 23:21:56.400303 systemd-modules-load[346]: Inserted module 'br_netfilter' Oct 28 23:21:56.410353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 28 23:21:56.413454 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 23:21:56.413564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:21:56.417541 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 28 23:21:56.419582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 23:21:56.422250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 23:21:56.432629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 23:21:56.437353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 23:21:56.440761 systemd-tmpfiles[365]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 28 23:21:56.447372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 23:21:56.450138 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 23:21:56.452552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 23:21:56.455650 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 28 23:21:56.458108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 23:21:56.460260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 23:21:56.480469 dracut-cmdline[384]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d4a291c245609e5c237181e704ec1c7ec0a6d72eca92291e03117b7440b9f526 Oct 28 23:21:56.503938 systemd-resolved[385]: Positive Trust Anchors: Oct 28 23:21:56.503953 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 23:21:56.503956 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 23:21:56.503987 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 23:21:56.529158 systemd-resolved[385]: Defaulting to hostname 'linux'. Oct 28 23:21:56.531217 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 23:21:56.532463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 23:21:56.562151 kernel: Loading iSCSI transport class v2.0-870. Oct 28 23:21:56.571166 kernel: iscsi: registered transport (tcp) Oct 28 23:21:56.584546 kernel: iscsi: registered transport (qla4xxx) Oct 28 23:21:56.584588 kernel: QLogic iSCSI HBA Driver Oct 28 23:21:56.606277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Oct 28 23:21:56.634749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 23:21:56.637754 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 23:21:56.685188 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 28 23:21:56.687736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 28 23:21:56.690649 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 28 23:21:56.722754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 28 23:21:56.725560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:21:56.759977 systemd-udevd[625]: Using default interface naming scheme 'v257'. Oct 28 23:21:56.767961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:21:56.772204 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 28 23:21:56.796434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 23:21:56.799572 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 23:21:56.802597 dracut-pre-trigger[700]: rd.md=0: removing MD RAID activation Oct 28 23:21:56.829902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 23:21:56.832647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 23:21:56.844891 systemd-networkd[734]: lo: Link UP Oct 28 23:21:56.844900 systemd-networkd[734]: lo: Gained carrier Oct 28 23:21:56.845420 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 23:21:56.846884 systemd[1]: Reached target network.target - Network. Oct 28 23:21:56.891376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 23:21:56.894260 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 28 23:21:56.931591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 28 23:21:56.949534 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 28 23:21:56.962411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 28 23:21:56.971573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 23:21:56.975361 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 28 23:21:56.976779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 23:21:56.976893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:21:56.979535 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:21:56.993488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:21:56.994977 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 23:21:56.994981 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 28 23:21:56.997654 systemd-networkd[734]: eth0: Link UP Oct 28 23:21:56.997817 systemd-networkd[734]: eth0: Gained carrier Oct 28 23:21:56.997831 systemd-networkd[734]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 23:21:57.008626 disk-uuid[802]: Primary Header is updated. Oct 28 23:21:57.008626 disk-uuid[802]: Secondary Entries is updated. Oct 28 23:21:57.008626 disk-uuid[802]: Secondary Header is updated. Oct 28 23:21:57.011212 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 23:21:57.016321 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 28 23:21:57.019051 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 23:21:57.025349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:21:57.028274 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 23:21:57.031849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 28 23:21:57.034986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:21:57.054755 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 28 23:21:57.312917 systemd-resolved[385]: Detected conflict on linux IN A 10.0.0.93 Oct 28 23:21:57.312934 systemd-resolved[385]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Oct 28 23:21:58.038601 disk-uuid[807]: Warning: The kernel is still using the old partition table. Oct 28 23:21:58.038601 disk-uuid[807]: The new table will be used at the next reboot or after you Oct 28 23:21:58.038601 disk-uuid[807]: run partprobe(8) or kpartx(8) Oct 28 23:21:58.038601 disk-uuid[807]: The operation has completed successfully. Oct 28 23:21:58.047260 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 28 23:21:58.047391 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 28 23:21:58.049702 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 28 23:21:58.081141 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (833) Oct 28 23:21:58.081179 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4 Oct 28 23:21:58.083227 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:21:58.086140 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:21:58.086160 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:21:58.092143 kernel: BTRFS info (device vda6): last unmount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4 Oct 28 23:21:58.092430 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 28 23:21:58.094859 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 28 23:21:58.183605 ignition[852]: Ignition 2.22.0 Oct 28 23:21:58.183624 ignition[852]: Stage: fetch-offline Oct 28 23:21:58.183671 ignition[852]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:21:58.183681 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:21:58.183837 ignition[852]: parsed url from cmdline: "" Oct 28 23:21:58.183840 ignition[852]: no config URL provided Oct 28 23:21:58.183845 ignition[852]: reading system config file "/usr/lib/ignition/user.ign" Oct 28 23:21:58.183853 ignition[852]: no config at "/usr/lib/ignition/user.ign" Oct 28 23:21:58.183891 ignition[852]: op(1): [started] loading QEMU firmware config module Oct 28 23:21:58.183895 ignition[852]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 28 23:21:58.189110 ignition[852]: op(1): [finished] loading QEMU firmware config module Oct 28 23:21:58.234799 ignition[852]: parsing config with SHA512: 50d19744f22dbac7289e9d6681f8f1d8af1dbacba987fe89f5530620479f56cbcb50f3cba47fd2f94c4625e3f42a028cc0119fcd3620202bdaf1908703a43f5d Oct 28 23:21:58.241596 unknown[852]: fetched base config from "system" Oct 28 23:21:58.241611 unknown[852]: fetched user config from "qemu" Oct 28 23:21:58.242100 ignition[852]: fetch-offline: fetch-offline passed Oct 28 23:21:58.244246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 23:21:58.242185 ignition[852]: Ignition finished successfully Oct 28 23:21:58.246400 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 28 23:21:58.247280 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 28 23:21:58.278254 ignition[867]: Ignition 2.22.0 Oct 28 23:21:58.278272 ignition[867]: Stage: kargs Oct 28 23:21:58.278419 ignition[867]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:21:58.278428 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:21:58.279219 ignition[867]: kargs: kargs passed Oct 28 23:21:58.279269 ignition[867]: Ignition finished successfully Oct 28 23:21:58.284230 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 28 23:21:58.286453 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 28 23:21:58.332059 ignition[875]: Ignition 2.22.0 Oct 28 23:21:58.332076 ignition[875]: Stage: disks Oct 28 23:21:58.332243 ignition[875]: no configs at "/usr/lib/ignition/base.d" Oct 28 23:21:58.332251 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:21:58.333025 ignition[875]: disks: disks passed Oct 28 23:21:58.333074 ignition[875]: Ignition finished successfully Oct 28 23:21:58.337180 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 28 23:21:58.338961 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 28 23:21:58.340777 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 28 23:21:58.342995 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 23:21:58.345147 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 23:21:58.346395 systemd-networkd[734]: eth0: Gained IPv6LL Oct 28 23:21:58.347296 systemd[1]: Reached target basic.target - Basic System. Oct 28 23:21:58.350200 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 28 23:21:58.392026 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 28 23:21:58.396405 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 28 23:21:58.399287 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 28 23:21:58.468138 kernel: EXT4-fs (vda9): mounted filesystem 9b30c517-6c40-4d45-aee4-76eeb6795508 r/w with ordered data mode. Quota mode: none. Oct 28 23:21:58.468575 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 28 23:21:58.469936 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 28 23:21:58.472621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 23:21:58.474430 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 28 23:21:58.475500 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 28 23:21:58.475535 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 28 23:21:58.475564 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 23:21:58.494877 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 28 23:21:58.497666 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 28 23:21:58.503154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Oct 28 23:21:58.503179 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4 Oct 28 23:21:58.503194 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:21:58.504717 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:21:58.504744 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:21:58.505960 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 23:21:58.538812 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Oct 28 23:21:58.543521 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Oct 28 23:21:58.547069 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Oct 28 23:21:58.551445 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Oct 28 23:21:58.626357 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 28 23:21:58.629084 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 28 23:21:58.631867 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 28 23:21:58.656760 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 28 23:21:58.659459 kernel: BTRFS info (device vda6): last unmount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4 Oct 28 23:21:58.671191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 28 23:21:58.686668 ignition[1007]: INFO : Ignition 2.22.0 Oct 28 23:21:58.686668 ignition[1007]: INFO : Stage: mount Oct 28 23:21:58.688541 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:21:58.688541 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:21:58.688541 ignition[1007]: INFO : mount: mount passed Oct 28 23:21:58.688541 ignition[1007]: INFO : Ignition finished successfully Oct 28 23:21:58.689271 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 28 23:21:58.691635 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Oct 28 23:21:59.472209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 28 23:21:59.504611 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020) Oct 28 23:21:59.504658 kernel: BTRFS info (device vda6): first mount of filesystem 66a0df79-0e4b-404d-a037-85d2c30f12b4 Oct 28 23:21:59.504669 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 28 23:21:59.508695 kernel: BTRFS info (device vda6): turning on async discard Oct 28 23:21:59.508725 kernel: BTRFS info (device vda6): enabling free space tree Oct 28 23:21:59.510055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 28 23:21:59.551790 ignition[1037]: INFO : Ignition 2.22.0 Oct 28 23:21:59.551790 ignition[1037]: INFO : Stage: files Oct 28 23:21:59.553684 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:21:59.553684 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:21:59.553684 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping Oct 28 23:21:59.553684 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 28 23:21:59.553684 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 28 23:21:59.560562 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 28 23:21:59.560562 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 28 23:21:59.560562 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 28 23:21:59.560562 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 28 23:21:59.560562 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 28 23:21:59.556639 unknown[1037]: wrote ssh authorized keys file for user: core Oct 28 23:21:59.648242 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 28 23:21:59.868186 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 28 23:21:59.868186 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 28 23:21:59.868186 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 28 23:21:59.868186 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 28 23:21:59.877376 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 28 23:21:59.893085 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 28 23:21:59.893085 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 28 23:21:59.893085 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Oct 28 23:22:00.410364 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 28 23:22:01.238507 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Oct 28 23:22:01.238507 ignition[1037]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 28 23:22:01.243016 ignition[1037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 28 23:22:01.245714 ignition[1037]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 28 23:22:01.260613 ignition[1037]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 23:22:01.264629 ignition[1037]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 28 23:22:01.266471 ignition[1037]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 28 23:22:01.266471 ignition[1037]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 28 23:22:01.266471 ignition[1037]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 28 23:22:01.266471 ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 28 23:22:01.266471 
ignition[1037]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 28 23:22:01.266471 ignition[1037]: INFO : files: files passed Oct 28 23:22:01.266471 ignition[1037]: INFO : Ignition finished successfully Oct 28 23:22:01.269208 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 28 23:22:01.272296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 28 23:22:01.275779 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 28 23:22:01.289527 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 28 23:22:01.289636 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 28 23:22:01.295873 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory Oct 28 23:22:01.299029 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:22:01.300683 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:22:01.302243 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 28 23:22:01.301223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 23:22:01.303967 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 28 23:22:01.306085 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 28 23:22:01.358293 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 28 23:22:01.358429 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 28 23:22:01.360916 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 28 23:22:01.362873 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 28 23:22:01.365025 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 28 23:22:01.365963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 28 23:22:01.391225 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 23:22:01.393977 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 28 23:22:01.414486 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 28 23:22:01.414634 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 28 23:22:01.417110 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:22:01.419422 systemd[1]: Stopped target timers.target - Timer Units. Oct 28 23:22:01.421338 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 28 23:22:01.421477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 28 23:22:01.424164 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 28 23:22:01.426314 systemd[1]: Stopped target basic.target - Basic System. Oct 28 23:22:01.428064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 28 23:22:01.429927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 28 23:22:01.432035 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
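Editorial note: the Ignition "files" stage recorded above writes the helm tarball, the core user's files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml), /etc/flatcar/update.conf, the kubernetes sysext link plus its payload, and the prepare-helm/coreos-metadata units. As an illustration only, the Python sketch below shows the shape of an Ignition config that could produce those operations; the spec version (3.4.0), the inline file contents, the SSH key placeholder and the prepare-helm ExecStart are assumptions, not taken from the log.

import json

# Sketch of an Ignition config matching the operations logged above.
# ASSUMPTIONS: spec version, inline contents, the placeholder SSH key and the
# prepare-helm ExecStart are illustrative; only the paths, URLs, unit names and
# the enabled/disabled presets come from the log entries.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"],  # placeholder key
    }]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"}},
            {"path": "/home/core/install.sh", "mode": 0o755,
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},  # placeholder script body
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,REBOOT_STRATEGY%3Doff%0A"}},  # assumed contents
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw"}},
            # nginx.yaml, nfs-pod.yaml and nfs-pvc.yaml omitted for brevity
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"},
        ],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                     "[Service]\nType=oneshot\n"
                     "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz\n"  # assumed
                     "[Install]\nWantedBy=multi-user.target\n"},
        {"name": "coreos-metadata.service", "enabled": False},  # unit body omitted; preset disabled as logged
    ]},
}

print(json.dumps(config, indent=2))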
Oct 28 23:22:01.434241 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 28 23:22:01.436434 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 28 23:22:01.438462 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 28 23:22:01.440470 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 28 23:22:01.442518 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 28 23:22:01.444313 systemd[1]: Stopped target swap.target - Swaps. Oct 28 23:22:01.445922 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 28 23:22:01.446076 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 28 23:22:01.448522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 28 23:22:01.450583 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 23:22:01.452598 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 28 23:22:01.456214 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 23:22:01.457501 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 28 23:22:01.457630 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 28 23:22:01.460627 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 28 23:22:01.460760 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 28 23:22:01.462853 systemd[1]: Stopped target paths.target - Path Units. Oct 28 23:22:01.464519 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 28 23:22:01.465463 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 23:22:01.466822 systemd[1]: Stopped target slices.target - Slice Units. Oct 28 23:22:01.468468 systemd[1]: Stopped target sockets.target - Socket Units. Oct 28 23:22:01.470248 systemd[1]: iscsid.socket: Deactivated successfully. Oct 28 23:22:01.470339 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 28 23:22:01.472592 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 28 23:22:01.472685 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 28 23:22:01.474333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 28 23:22:01.474449 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 28 23:22:01.476317 systemd[1]: ignition-files.service: Deactivated successfully. Oct 28 23:22:01.476425 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 28 23:22:01.478855 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 28 23:22:01.482844 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 28 23:22:01.485030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 28 23:22:01.485175 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 23:22:01.487294 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 28 23:22:01.487407 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 23:22:01.489261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 28 23:22:01.489370 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 28 23:22:01.496287 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 28 23:22:01.496381 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 28 23:22:01.504839 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 28 23:22:01.508133 ignition[1095]: INFO : Ignition 2.22.0 Oct 28 23:22:01.508133 ignition[1095]: INFO : Stage: umount Oct 28 23:22:01.509881 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 28 23:22:01.509881 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 28 23:22:01.509881 ignition[1095]: INFO : umount: umount passed Oct 28 23:22:01.509881 ignition[1095]: INFO : Ignition finished successfully Oct 28 23:22:01.511110 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 28 23:22:01.511244 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 28 23:22:01.513630 systemd[1]: Stopped target network.target - Network. Oct 28 23:22:01.515207 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 28 23:22:01.515283 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 28 23:22:01.517295 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 28 23:22:01.517349 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 28 23:22:01.519199 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 28 23:22:01.519251 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 28 23:22:01.521231 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 28 23:22:01.521277 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 28 23:22:01.523103 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 28 23:22:01.525003 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 28 23:22:01.529812 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 28 23:22:01.529920 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 28 23:22:01.534633 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 28 23:22:01.534728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 28 23:22:01.539056 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 28 23:22:01.541331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 28 23:22:01.541365 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 28 23:22:01.544699 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 28 23:22:01.545755 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 28 23:22:01.545828 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 28 23:22:01.548180 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 28 23:22:01.548224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 28 23:22:01.550094 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 28 23:22:01.550150 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 28 23:22:01.552223 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:22:01.555357 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 28 23:22:01.556210 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 28 23:22:01.557638 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 28 23:22:01.557720 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Oct 28 23:22:01.570010 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 28 23:22:01.574291 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:22:01.575982 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 28 23:22:01.576033 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 28 23:22:01.578030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 28 23:22:01.578063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 23:22:01.579963 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 28 23:22:01.580028 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 28 23:22:01.582903 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 28 23:22:01.582953 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 28 23:22:01.585875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 28 23:22:01.585933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 28 23:22:01.589801 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 28 23:22:01.591293 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 28 23:22:01.591356 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 23:22:01.593412 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 28 23:22:01.593460 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 23:22:01.595785 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 28 23:22:01.595836 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 23:22:01.598160 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 28 23:22:01.598207 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 23:22:01.600364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 28 23:22:01.600413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:22:01.603311 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 28 23:22:01.605278 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 28 23:22:01.610487 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 28 23:22:01.610577 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 28 23:22:01.612060 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 28 23:22:01.614831 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 28 23:22:01.629900 systemd[1]: Switching root. Oct 28 23:22:01.661462 systemd-journald[344]: Journal stopped Oct 28 23:22:02.457913 systemd-journald[344]: Received SIGTERM from PID 1 (systemd). 
Oct 28 23:22:02.457969 kernel: SELinux: policy capability network_peer_controls=1 Oct 28 23:22:02.457995 kernel: SELinux: policy capability open_perms=1 Oct 28 23:22:02.458006 kernel: SELinux: policy capability extended_socket_class=1 Oct 28 23:22:02.458016 kernel: SELinux: policy capability always_check_network=0 Oct 28 23:22:02.458026 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 28 23:22:02.458039 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 28 23:22:02.458052 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 28 23:22:02.458062 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 28 23:22:02.458072 kernel: SELinux: policy capability userspace_initial_context=0 Oct 28 23:22:02.458082 kernel: audit: type=1403 audit(1761693721.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 28 23:22:02.458093 systemd[1]: Successfully loaded SELinux policy in 62.007ms. Oct 28 23:22:02.458110 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.329ms. Oct 28 23:22:02.458135 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 28 23:22:02.458148 systemd[1]: Detected virtualization kvm. Oct 28 23:22:02.458159 systemd[1]: Detected architecture arm64. Oct 28 23:22:02.458170 systemd[1]: Detected first boot. Oct 28 23:22:02.458180 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 28 23:22:02.458193 zram_generator::config[1139]: No configuration found. Oct 28 23:22:02.458205 kernel: NET: Registered PF_VSOCK protocol family Oct 28 23:22:02.458215 systemd[1]: Populated /etc with preset unit settings. Oct 28 23:22:02.458228 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 28 23:22:02.458238 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 28 23:22:02.458249 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 28 23:22:02.458261 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 28 23:22:02.458273 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 28 23:22:02.458283 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 28 23:22:02.458294 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 28 23:22:02.458310 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 28 23:22:02.458320 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 28 23:22:02.458331 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 28 23:22:02.458341 systemd[1]: Created slice user.slice - User and Session Slice. Oct 28 23:22:02.458352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 28 23:22:02.458363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 28 23:22:02.458373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 28 23:22:02.458385 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Oct 28 23:22:02.458396 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 28 23:22:02.458407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 28 23:22:02.458419 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 28 23:22:02.458429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 28 23:22:02.458440 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 28 23:22:02.458452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 28 23:22:02.458463 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 28 23:22:02.458474 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 28 23:22:02.458485 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 28 23:22:02.458496 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 28 23:22:02.458506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 28 23:22:02.458517 systemd[1]: Reached target slices.target - Slice Units. Oct 28 23:22:02.458529 systemd[1]: Reached target swap.target - Swaps. Oct 28 23:22:02.458540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 28 23:22:02.458551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 28 23:22:02.458561 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 28 23:22:02.458572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 28 23:22:02.458583 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 28 23:22:02.458594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 28 23:22:02.458607 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 28 23:22:02.458617 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 28 23:22:02.458628 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 28 23:22:02.458639 systemd[1]: Mounting media.mount - External Media Directory... Oct 28 23:22:02.458650 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 28 23:22:02.458660 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 28 23:22:02.458671 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 28 23:22:02.458683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 28 23:22:02.458694 systemd[1]: Reached target machines.target - Containers. Oct 28 23:22:02.458705 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 28 23:22:02.458715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:22:02.458726 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 28 23:22:02.458737 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 28 23:22:02.458748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 23:22:02.458759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 28 23:22:02.458771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 23:22:02.458782 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 28 23:22:02.458792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 23:22:02.458803 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 28 23:22:02.458813 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 28 23:22:02.458824 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 28 23:22:02.458839 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 28 23:22:02.458851 systemd[1]: Stopped systemd-fsck-usr.service. Oct 28 23:22:02.458862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:22:02.458873 kernel: fuse: init (API version 7.41) Oct 28 23:22:02.458882 kernel: ACPI: bus type drm_connector registered Oct 28 23:22:02.458892 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 28 23:22:02.458903 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 28 23:22:02.458916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 28 23:22:02.458927 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 28 23:22:02.458938 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 28 23:22:02.458948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 28 23:22:02.458985 systemd-journald[1221]: Collecting audit messages is disabled. Oct 28 23:22:02.459012 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 28 23:22:02.459024 systemd-journald[1221]: Journal started Oct 28 23:22:02.459044 systemd-journald[1221]: Runtime Journal (/run/log/journal/1a0365e7ef4b4f91801b8efaf490781e) is 6M, max 48.5M, 42.4M free. Oct 28 23:22:02.224038 systemd[1]: Queued start job for default target multi-user.target. Oct 28 23:22:02.249218 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 28 23:22:02.249686 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 28 23:22:02.461771 systemd[1]: Started systemd-journald.service - Journal Service. Oct 28 23:22:02.462741 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 28 23:22:02.464079 systemd[1]: Mounted media.mount - External Media Directory. Oct 28 23:22:02.465250 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 28 23:22:02.466412 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 28 23:22:02.467610 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 28 23:22:02.468962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 28 23:22:02.472164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 28 23:22:02.473605 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 28 23:22:02.473778 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 28 23:22:02.475302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 28 23:22:02.475475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 23:22:02.476824 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 23:22:02.477014 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 23:22:02.478437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 23:22:02.478613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 28 23:22:02.480270 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 28 23:22:02.480445 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 28 23:22:02.481764 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 23:22:02.481930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 23:22:02.483421 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 28 23:22:02.484930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 28 23:22:02.487112 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 28 23:22:02.488880 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 28 23:22:02.502220 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 28 23:22:02.503902 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 28 23:22:02.506413 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 28 23:22:02.508668 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 28 23:22:02.509942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 28 23:22:02.509993 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 28 23:22:02.512029 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 28 23:22:02.513856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:22:02.523085 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 28 23:22:02.525346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 28 23:22:02.526629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 23:22:02.527692 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 28 23:22:02.529048 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 23:22:02.530278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 28 23:22:02.537287 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 28 23:22:02.538279 systemd-journald[1221]: Time spent on flushing to /var/log/journal/1a0365e7ef4b4f91801b8efaf490781e is 17.227ms for 885 entries. Oct 28 23:22:02.538279 systemd-journald[1221]: System Journal (/var/log/journal/1a0365e7ef4b4f91801b8efaf490781e) is 8M, max 163.5M, 155.5M free. Oct 28 23:22:02.563415 systemd-journald[1221]: Received client request to flush runtime journal. 
Oct 28 23:22:02.563456 kernel: loop1: detected capacity change from 0 to 119400 Oct 28 23:22:02.540721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 28 23:22:02.542947 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 28 23:22:02.545246 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 28 23:22:02.547073 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 28 23:22:02.551135 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 28 23:22:02.554165 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 28 23:22:02.560283 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 28 23:22:02.563406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 28 23:22:02.568381 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 28 23:22:02.581149 kernel: loop2: detected capacity change from 0 to 100192 Oct 28 23:22:02.582215 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 28 23:22:02.585078 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Oct 28 23:22:02.585440 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Oct 28 23:22:02.590906 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 28 23:22:02.595788 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 28 23:22:02.615178 kernel: loop3: detected capacity change from 0 to 211168 Oct 28 23:22:02.632310 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 28 23:22:02.635350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 28 23:22:02.638191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 28 23:22:02.639212 kernel: loop4: detected capacity change from 0 to 119400 Oct 28 23:22:02.645146 kernel: loop5: detected capacity change from 0 to 100192 Oct 28 23:22:02.646348 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 28 23:22:02.651137 kernel: loop6: detected capacity change from 0 to 211168 Oct 28 23:22:02.655430 (sd-merge)[1277]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 28 23:22:02.658711 (sd-merge)[1277]: Merged extensions into '/usr'. Oct 28 23:22:02.664468 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)... Oct 28 23:22:02.664491 systemd[1]: Reloading... Oct 28 23:22:02.677080 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 28 23:22:02.677125 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Oct 28 23:22:02.711575 zram_generator::config[1309]: No configuration found. Oct 28 23:22:02.762833 systemd-resolved[1276]: Positive Trust Anchors: Oct 28 23:22:02.762858 systemd-resolved[1276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 28 23:22:02.762862 systemd-resolved[1276]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 28 23:22:02.762894 systemd-resolved[1276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 28 23:22:02.769195 systemd-resolved[1276]: Defaulting to hostname 'linux'. Oct 28 23:22:02.859596 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 28 23:22:02.860019 systemd[1]: Reloading finished in 195 ms. Oct 28 23:22:02.890891 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 28 23:22:02.892417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 28 23:22:02.894023 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 28 23:22:02.897755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 28 23:22:02.901948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 28 23:22:02.917452 systemd[1]: Starting ensure-sysext.service... Oct 28 23:22:02.919528 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 28 23:22:02.934785 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 28 23:22:02.934820 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 28 23:22:02.935052 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 28 23:22:02.935228 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 28 23:22:02.935813 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 28 23:22:02.936002 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Oct 28 23:22:02.936051 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Oct 28 23:22:02.938219 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... Oct 28 23:22:02.938238 systemd[1]: Reloading... Oct 28 23:22:02.940214 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 23:22:02.940228 systemd-tmpfiles[1351]: Skipping /boot Oct 28 23:22:02.946594 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Oct 28 23:22:02.946611 systemd-tmpfiles[1351]: Skipping /boot Oct 28 23:22:02.989216 zram_generator::config[1387]: No configuration found. Oct 28 23:22:03.112247 systemd[1]: Reloading finished in 173 ms. Oct 28 23:22:03.131867 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 28 23:22:03.146086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 28 23:22:03.154253 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 23:22:03.156851 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
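Editorial note: the "(sd-merge)" entries above show systemd-sysext finding 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' and merging them into /usr; the kubernetes image is visible only because the earlier Ignition stage created the /etc/extensions/kubernetes.raw symlink pointing at the payload under /opt/extensions. Below is a rough Python sketch of that discovery step, assuming systemd-sysext's usual search directories (the directory list is my assumption about this image, not something shown in the log).

from pathlib import Path

# Approximate the discovery step behind the "(sd-merge)" entries above.
# ASSUMPTION: these are the sysext search directories on this image.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_sysexts():
    """Return (name, resolved_path) for each *.raw image sysext would consider."""
    found = []
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.glob("*.raw")):
            # /etc/extensions/kubernetes.raw is the symlink written by Ignition,
            # pointing at /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
            found.append((entry.name, entry.resolve()))
    return found

if __name__ == "__main__":
    for name, target in discover_sysexts():
        print(f"{name} -> {target}")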
Oct 28 23:22:03.174533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 28 23:22:03.177040 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 28 23:22:03.181754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 28 23:22:03.185486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 28 23:22:03.191943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:22:03.195459 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 28 23:22:03.199540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 28 23:22:03.203079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 28 23:22:03.204442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:22:03.204570 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:22:03.206596 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:22:03.206759 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:22:03.206843 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:22:03.212800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 28 23:22:03.214811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 28 23:22:03.217705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 28 23:22:03.217755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 28 23:22:03.218419 systemd[1]: Finished ensure-sysext.service. Oct 28 23:22:03.219946 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 28 23:22:03.221959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 28 23:22:03.222166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 28 23:22:03.226476 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 28 23:22:03.227214 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 28 23:22:03.229245 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 28 23:22:03.229435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 28 23:22:03.232953 systemd-udevd[1422]: Using default interface naming scheme 'v257'. Oct 28 23:22:03.234596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 28 23:22:03.235305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 28 23:22:03.239737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 28 23:22:03.239943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 28 23:22:03.243447 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 28 23:22:03.248178 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 28 23:22:03.255335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 28 23:22:03.267377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 28 23:22:03.271479 augenrules[1474]: No rules Oct 28 23:22:03.275005 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 28 23:22:03.276792 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 28 23:22:03.277665 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 23:22:03.279213 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 23:22:03.356170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 28 23:22:03.357861 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 28 23:22:03.360717 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 28 23:22:03.362433 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 28 23:22:03.364341 systemd[1]: Reached target time-set.target - System Time Set. Oct 28 23:22:03.377515 systemd-networkd[1480]: lo: Link UP Oct 28 23:22:03.377523 systemd-networkd[1480]: lo: Gained carrier Oct 28 23:22:03.378254 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 28 23:22:03.379455 systemd[1]: Reached target network.target - Network. Oct 28 23:22:03.381421 systemd-networkd[1480]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 23:22:03.381437 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 28 23:22:03.382023 systemd-networkd[1480]: eth0: Link UP Oct 28 23:22:03.382154 systemd-networkd[1480]: eth0: Gained carrier Oct 28 23:22:03.382174 systemd-networkd[1480]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 28 23:22:03.382981 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 28 23:22:03.386579 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 28 23:22:03.390203 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 28 23:22:03.399325 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 28 23:22:03.400239 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Oct 28 23:22:03.401335 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Oct 28 23:22:03.401654 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-10-28 23:22:03.609789 UTC. Oct 28 23:22:03.409261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 28 23:22:03.488382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 28 23:22:03.491182 ldconfig[1419]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 28 23:22:03.498196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 28 23:22:03.503232 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 28 23:22:03.515196 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 28 23:22:03.536290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 28 23:22:03.538940 systemd[1]: Reached target sysinit.target - System Initialization. Oct 28 23:22:03.540267 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 28 23:22:03.541631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 28 23:22:03.543164 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 28 23:22:03.544433 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 28 23:22:03.545749 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 28 23:22:03.547075 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 28 23:22:03.547114 systemd[1]: Reached target paths.target - Path Units. Oct 28 23:22:03.548075 systemd[1]: Reached target timers.target - Timer Units. Oct 28 23:22:03.550253 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 28 23:22:03.552715 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 28 23:22:03.555774 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 28 23:22:03.557331 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 28 23:22:03.558691 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 28 23:22:03.564098 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 28 23:22:03.565510 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 28 23:22:03.567446 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 28 23:22:03.568673 systemd[1]: Reached target sockets.target - Socket Units. Oct 28 23:22:03.569712 systemd[1]: Reached target basic.target - Basic System. Oct 28 23:22:03.570759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:22:03.570791 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 28 23:22:03.571889 systemd[1]: Starting containerd.service - containerd container runtime... Oct 28 23:22:03.574111 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 28 23:22:03.576246 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 28 23:22:03.578527 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
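Editorial note: the systemd-networkd entries above show eth0 coming up with a DHCPv4 lease of 10.0.0.93/16 and gateway 10.0.0.1, after which systemd-timesyncd reaches an NTP server at that same gateway address. A small sanity check of the lease using Python's ipaddress module (the addresses are taken from the log; the check itself is purely illustrative):

import ipaddress

# Lease as logged: 10.0.0.93/16 via gateway 10.0.0.1.
iface = ipaddress.ip_interface("10.0.0.93/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)              # 10.0.0.0/16 -> hosts 10.0.0.1 .. 10.0.255.254
print(gateway in iface.network)   # True: the gateway is on-link, no extra route needed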
Oct 28 23:22:03.580916 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 28 23:22:03.582773 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 28 23:22:03.584005 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 28 23:22:03.585777 jq[1528]: false Oct 28 23:22:03.586619 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 28 23:22:03.589287 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 28 23:22:03.592296 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 28 23:22:03.595794 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 28 23:22:03.596911 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 28 23:22:03.597521 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 28 23:22:03.598082 systemd[1]: Starting update-engine.service - Update Engine... Oct 28 23:22:03.599414 extend-filesystems[1529]: Found /dev/vda6 Oct 28 23:22:03.604475 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 28 23:22:03.606108 extend-filesystems[1529]: Found /dev/vda9 Oct 28 23:22:03.609624 extend-filesystems[1529]: Checking size of /dev/vda9 Oct 28 23:22:03.609164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 28 23:22:03.610762 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 28 23:22:03.610940 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 28 23:22:03.611266 systemd[1]: motdgen.service: Deactivated successfully. Oct 28 23:22:03.611680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 28 23:22:03.617320 jq[1545]: true Oct 28 23:22:03.617586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 28 23:22:03.617796 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 28 23:22:03.624271 extend-filesystems[1529]: Resized partition /dev/vda9 Oct 28 23:22:03.628666 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025) Oct 28 23:22:03.633242 update_engine[1543]: I20251028 23:22:03.632295 1543 main.cc:92] Flatcar Update Engine starting Oct 28 23:22:03.636234 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 28 23:22:03.642303 jq[1559]: true Oct 28 23:22:03.646165 tar[1555]: linux-arm64/LICENSE Oct 28 23:22:03.646379 tar[1555]: linux-arm64/helm Oct 28 23:22:03.666750 dbus-daemon[1526]: [system] SELinux support is enabled Oct 28 23:22:03.666999 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 28 23:22:03.670801 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 28 23:22:03.670842 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 28 23:22:03.672557 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 28 23:22:03.672585 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 28 23:22:03.675214 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 28 23:22:03.677273 systemd[1]: Started update-engine.service - Update Engine. Oct 28 23:22:03.681353 update_engine[1543]: I20251028 23:22:03.680914 1543 update_check_scheduler.cc:74] Next update check in 5m6s Oct 28 23:22:03.679667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 28 23:22:03.689551 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 28 23:22:03.689551 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 28 23:22:03.689551 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 28 23:22:03.720286 extend-filesystems[1529]: Resized filesystem in /dev/vda9 Oct 28 23:22:03.722414 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Oct 28 23:22:03.692101 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 28 23:22:03.692321 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 28 23:22:03.702823 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (Power Button) Oct 28 23:22:03.703139 systemd-logind[1542]: New seat seat0. Oct 28 23:22:03.706580 systemd[1]: Started systemd-logind.service - User Login Management. Oct 28 23:22:03.711157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 28 23:22:03.714138 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
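Editorial note: the extend-filesystems entries above show resize2fs growing /dev/vda9 from 456704 to 1784827 blocks of 4 KiB, i.e. roughly a 1.7 GiB root filesystem expanded to about 6.8 GiB. The short Python check below only restates the block counts from the log in bytes:

# Quick arithmetic on the resize2fs figures logged above (4 KiB ext4 blocks).
BLOCK = 4 * 1024
old_blocks, new_blocks = 456_704, 1_784_827

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB, "
      f"growth: {new_gib - old_gib:.2f} GiB")
# before: 1.74 GiB, after: 6.81 GiB, growth: 5.07 GiB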
Oct 28 23:22:03.761797 locksmithd[1590]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 28 23:22:03.798256 containerd[1565]: time="2025-10-28T23:22:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 28 23:22:03.798837 containerd[1565]: time="2025-10-28T23:22:03.798783800Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809173440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.12µs" Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809251960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809277280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809552840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809575440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809600080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809652400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809664680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809949600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809966400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809989800Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811134 containerd[1565]: time="2025-10-28T23:22:03.809999800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810154320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810403640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810431760Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810504320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810546720Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810834880Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 28 23:22:03.811367 containerd[1565]: time="2025-10-28T23:22:03.810946720Z" level=info msg="metadata content store policy set" policy=shared Oct 28 23:22:03.814187 containerd[1565]: time="2025-10-28T23:22:03.814145960Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 28 23:22:03.814252 containerd[1565]: time="2025-10-28T23:22:03.814207880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 28 23:22:03.814252 containerd[1565]: time="2025-10-28T23:22:03.814230200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 28 23:22:03.814252 containerd[1565]: time="2025-10-28T23:22:03.814242360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 28 23:22:03.814300 containerd[1565]: time="2025-10-28T23:22:03.814252640Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 28 23:22:03.814300 containerd[1565]: time="2025-10-28T23:22:03.814263160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 28 23:22:03.814300 containerd[1565]: time="2025-10-28T23:22:03.814276480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 28 23:22:03.814300 containerd[1565]: time="2025-10-28T23:22:03.814288120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 28 23:22:03.814388 containerd[1565]: time="2025-10-28T23:22:03.814364200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 28 23:22:03.814411 containerd[1565]: time="2025-10-28T23:22:03.814387360Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 28 23:22:03.814411 containerd[1565]: time="2025-10-28T23:22:03.814399080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 28 23:22:03.814448 containerd[1565]: time="2025-10-28T23:22:03.814411120Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 28 23:22:03.814544 containerd[1565]: time="2025-10-28T23:22:03.814524840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 28 23:22:03.814574 containerd[1565]: time="2025-10-28T23:22:03.814551600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 28 23:22:03.814574 containerd[1565]: time="2025-10-28T23:22:03.814568440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 28 
23:22:03.814607 containerd[1565]: time="2025-10-28T23:22:03.814582280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 28 23:22:03.814607 containerd[1565]: time="2025-10-28T23:22:03.814600000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 28 23:22:03.814639 containerd[1565]: time="2025-10-28T23:22:03.814609760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 28 23:22:03.814639 containerd[1565]: time="2025-10-28T23:22:03.814620760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 28 23:22:03.814639 containerd[1565]: time="2025-10-28T23:22:03.814630400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 28 23:22:03.814691 containerd[1565]: time="2025-10-28T23:22:03.814640920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 28 23:22:03.814691 containerd[1565]: time="2025-10-28T23:22:03.814651280Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 28 23:22:03.814691 containerd[1565]: time="2025-10-28T23:22:03.814661120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 28 23:22:03.814964 containerd[1565]: time="2025-10-28T23:22:03.814898200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 28 23:22:03.814964 containerd[1565]: time="2025-10-28T23:22:03.814923000Z" level=info msg="Start snapshots syncer" Oct 28 23:22:03.814964 containerd[1565]: time="2025-10-28T23:22:03.814952480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 28 23:22:03.815336 containerd[1565]: time="2025-10-28T23:22:03.815242600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 28 23:22:03.815429 containerd[1565]: time="2025-10-28T23:22:03.815352000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 28 23:22:03.815429 containerd[1565]: time="2025-10-28T23:22:03.815423280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 28 23:22:03.815546 containerd[1565]: time="2025-10-28T23:22:03.815522400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 28 23:22:03.815571 containerd[1565]: time="2025-10-28T23:22:03.815550600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 28 23:22:03.815571 containerd[1565]: time="2025-10-28T23:22:03.815562600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 28 23:22:03.815604 containerd[1565]: time="2025-10-28T23:22:03.815581840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 28 23:22:03.815604 containerd[1565]: time="2025-10-28T23:22:03.815596680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 28 23:22:03.815636 containerd[1565]: time="2025-10-28T23:22:03.815606640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 28 23:22:03.815636 containerd[1565]: time="2025-10-28T23:22:03.815617720Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 28 23:22:03.815676 containerd[1565]: time="2025-10-28T23:22:03.815645760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 28 23:22:03.815676 containerd[1565]: 
time="2025-10-28T23:22:03.815656920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 28 23:22:03.815676 containerd[1565]: time="2025-10-28T23:22:03.815667600Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 28 23:22:03.815722 containerd[1565]: time="2025-10-28T23:22:03.815711400Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:22:03.815740 containerd[1565]: time="2025-10-28T23:22:03.815726520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 28 23:22:03.815740 containerd[1565]: time="2025-10-28T23:22:03.815735200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:22:03.815776 containerd[1565]: time="2025-10-28T23:22:03.815744040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 28 23:22:03.815776 containerd[1565]: time="2025-10-28T23:22:03.815752040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 28 23:22:03.815776 containerd[1565]: time="2025-10-28T23:22:03.815761200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 28 23:22:03.815776 containerd[1565]: time="2025-10-28T23:22:03.815771080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 28 23:22:03.815864 containerd[1565]: time="2025-10-28T23:22:03.815849560Z" level=info msg="runtime interface created" Oct 28 23:22:03.815864 containerd[1565]: time="2025-10-28T23:22:03.815858640Z" level=info msg="created NRI interface" Oct 28 23:22:03.815908 containerd[1565]: time="2025-10-28T23:22:03.815868360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 28 23:22:03.815908 containerd[1565]: time="2025-10-28T23:22:03.815879920Z" level=info msg="Connect containerd service" Oct 28 23:22:03.815908 containerd[1565]: time="2025-10-28T23:22:03.815903800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 28 23:22:03.817136 containerd[1565]: time="2025-10-28T23:22:03.817099480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 28 23:22:03.901098 containerd[1565]: time="2025-10-28T23:22:03.900905800Z" level=info msg="Start subscribing containerd event" Oct 28 23:22:03.901098 containerd[1565]: time="2025-10-28T23:22:03.901049560Z" level=info msg="Start recovering state" Oct 28 23:22:03.901241 containerd[1565]: time="2025-10-28T23:22:03.901157800Z" level=info msg="Start event monitor" Oct 28 23:22:03.901241 containerd[1565]: time="2025-10-28T23:22:03.901172840Z" level=info msg="Start cni network conf syncer for default" Oct 28 23:22:03.901241 containerd[1565]: time="2025-10-28T23:22:03.901180440Z" level=info msg="Start streaming server" Oct 28 23:22:03.901292 containerd[1565]: time="2025-10-28T23:22:03.901254120Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 28 23:22:03.901292 containerd[1565]: 
time="2025-10-28T23:22:03.901261640Z" level=info msg="runtime interface starting up..." Oct 28 23:22:03.901292 containerd[1565]: time="2025-10-28T23:22:03.901267120Z" level=info msg="starting plugins..." Oct 28 23:22:03.901292 containerd[1565]: time="2025-10-28T23:22:03.901279720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 28 23:22:03.901950 containerd[1565]: time="2025-10-28T23:22:03.901915240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 28 23:22:03.902001 containerd[1565]: time="2025-10-28T23:22:03.901983080Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 28 23:22:03.903347 containerd[1565]: time="2025-10-28T23:22:03.902042320Z" level=info msg="containerd successfully booted in 0.104218s" Oct 28 23:22:03.902195 systemd[1]: Started containerd.service - containerd container runtime. Oct 28 23:22:03.982273 tar[1555]: linux-arm64/README.md Oct 28 23:22:04.010696 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 28 23:22:05.000301 systemd-networkd[1480]: eth0: Gained IPv6LL Oct 28 23:22:05.004206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 28 23:22:05.007304 systemd[1]: Reached target network-online.target - Network is Online. Oct 28 23:22:05.010737 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 28 23:22:05.013782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:05.028391 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 28 23:22:05.046959 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 28 23:22:05.047222 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 28 23:22:05.048979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 28 23:22:05.056278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 28 23:22:05.606089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:05.618291 sshd_keygen[1560]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 28 23:22:05.622451 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:22:05.638883 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 28 23:22:05.642372 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 28 23:22:05.667735 systemd[1]: issuegen.service: Deactivated successfully. Oct 28 23:22:05.667946 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 28 23:22:05.672872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 28 23:22:05.696973 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 28 23:22:05.701973 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 28 23:22:05.706425 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 28 23:22:05.707761 systemd[1]: Reached target getty.target - Login Prompts. Oct 28 23:22:05.709422 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 28 23:22:05.712221 systemd[1]: Startup finished in 1.221s (kernel) + 5.711s (initrd) + 3.921s (userspace) = 10.855s. 
Oct 28 23:22:05.975765 kubelet[1648]: E1028 23:22:05.975642 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:22:05.978410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:22:05.978552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:22:05.980243 systemd[1]: kubelet.service: Consumed 762ms CPU time, 257M memory peak. Oct 28 23:22:07.237925 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 28 23:22:07.239194 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:46020.service - OpenSSH per-connection server daemon (10.0.0.1:46020). Oct 28 23:22:07.315619 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 46020 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:07.317536 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.324176 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 28 23:22:07.325219 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 28 23:22:07.330554 systemd-logind[1542]: New session 1 of user core. Oct 28 23:22:07.345532 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 28 23:22:07.348050 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 28 23:22:07.366517 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.368936 systemd-logind[1542]: New session 2 of user core. Oct 28 23:22:07.471057 systemd[1683]: Queued start job for default target default.target. Oct 28 23:22:07.484281 systemd[1683]: Created slice app.slice - User Application Slice. Oct 28 23:22:07.484316 systemd[1683]: Reached target paths.target - Paths. Oct 28 23:22:07.484359 systemd[1683]: Reached target timers.target - Timers. Oct 28 23:22:07.485748 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 28 23:22:07.496522 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 28 23:22:07.496686 systemd[1683]: Reached target sockets.target - Sockets. Oct 28 23:22:07.496736 systemd[1683]: Reached target basic.target - Basic System. Oct 28 23:22:07.496782 systemd[1683]: Reached target default.target - Main User Target. Oct 28 23:22:07.496819 systemd[1683]: Startup finished in 121ms. Oct 28 23:22:07.497012 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 28 23:22:07.498574 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 28 23:22:07.509943 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:46030.service - OpenSSH per-connection server daemon (10.0.0.1:46030). Oct 28 23:22:07.582581 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 46030 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:07.583902 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.588202 systemd-logind[1542]: New session 3 of user core. Oct 28 23:22:07.604340 systemd[1]: Started session-3.scope - Session 3 of User core. 
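[Editor's note] The kubelet exit above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml") is the normal state before that file exists; the KUBELET_KUBEADM_ARGS reference earlier suggests it is expected to be generated by kubeadm init/join rather than written by hand. A sketch of the kind of file that ends up there, with illustrative values only (not taken from this host):

```sh
# Hypothetical sketch of /var/lib/kubelet/config.yaml; on a kubeadm node this is
# generated for you, so the values below are assumptions for illustration.
cat > /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
```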
Oct 28 23:22:07.615491 sshd[1699]: Connection closed by 10.0.0.1 port 46030 Oct 28 23:22:07.615920 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:07.630277 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:46030.service: Deactivated successfully. Oct 28 23:22:07.632691 systemd[1]: session-3.scope: Deactivated successfully. Oct 28 23:22:07.633449 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Oct 28 23:22:07.636023 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:46042.service - OpenSSH per-connection server daemon (10.0.0.1:46042). Oct 28 23:22:07.636655 systemd-logind[1542]: Removed session 3. Oct 28 23:22:07.699080 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 46042 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:07.700521 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.705529 systemd-logind[1542]: New session 4 of user core. Oct 28 23:22:07.719348 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 28 23:22:07.727941 sshd[1710]: Connection closed by 10.0.0.1 port 46042 Oct 28 23:22:07.728265 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:07.742371 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:46042.service: Deactivated successfully. Oct 28 23:22:07.745613 systemd[1]: session-4.scope: Deactivated successfully. Oct 28 23:22:07.746360 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Oct 28 23:22:07.748938 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:46044.service - OpenSSH per-connection server daemon (10.0.0.1:46044). Oct 28 23:22:07.750207 systemd-logind[1542]: Removed session 4. Oct 28 23:22:07.814011 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 46044 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:07.815381 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.820214 systemd-logind[1542]: New session 5 of user core. Oct 28 23:22:07.829321 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 28 23:22:07.841852 sshd[1720]: Connection closed by 10.0.0.1 port 46044 Oct 28 23:22:07.842212 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:07.851023 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:46044.service: Deactivated successfully. Oct 28 23:22:07.853546 systemd[1]: session-5.scope: Deactivated successfully. Oct 28 23:22:07.854320 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Oct 28 23:22:07.857116 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:46054.service - OpenSSH per-connection server daemon (10.0.0.1:46054). Oct 28 23:22:07.857722 systemd-logind[1542]: Removed session 5. Oct 28 23:22:07.912470 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:07.913773 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:07.917830 systemd-logind[1542]: New session 6 of user core. Oct 28 23:22:07.930308 systemd[1]: Started session-6.scope - Session 6 of User core. 
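[Editor's note] Every connection above is accepted against the same key fingerprint (SHA256:OtbC…). A hedged way to cross-check which authorized key that is, assuming the usual authorized_keys location for the "core" user:

```sh
# Print fingerprints of the keys sshd would accept and compare them with the
# "Accepted publickey ... SHA256:..." entries above.
ssh-keygen -lf /home/core/.ssh/authorized_keys
journalctl --no-pager | grep 'Accepted publickey'
```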
Oct 28 23:22:07.947403 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 28 23:22:07.947660 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:22:07.957075 sudo[1731]: pam_unix(sudo:session): session closed for user root Oct 28 23:22:07.958325 sshd[1730]: Connection closed by 10.0.0.1 port 46054 Oct 28 23:22:07.958815 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:07.969213 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:46054.service: Deactivated successfully. Oct 28 23:22:07.970820 systemd[1]: session-6.scope: Deactivated successfully. Oct 28 23:22:07.971612 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Oct 28 23:22:07.974054 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Oct 28 23:22:07.974564 systemd-logind[1542]: Removed session 6. Oct 28 23:22:08.027256 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:08.028516 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:08.033041 systemd-logind[1542]: New session 7 of user core. Oct 28 23:22:08.043281 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 28 23:22:08.055153 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 28 23:22:08.055394 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:22:08.059118 sudo[1744]: pam_unix(sudo:session): session closed for user root Oct 28 23:22:08.065053 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 28 23:22:08.065301 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:22:08.071855 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 28 23:22:08.115017 augenrules[1768]: No rules Oct 28 23:22:08.116064 systemd[1]: audit-rules.service: Deactivated successfully. Oct 28 23:22:08.117262 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 28 23:22:08.119203 sudo[1743]: pam_unix(sudo:session): session closed for user root Oct 28 23:22:08.120157 sshd[1742]: Connection closed by 10.0.0.1 port 46070 Oct 28 23:22:08.120612 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:08.133080 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:46070.service: Deactivated successfully. Oct 28 23:22:08.134619 systemd[1]: session-7.scope: Deactivated successfully. Oct 28 23:22:08.136827 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Oct 28 23:22:08.138889 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:46072.service - OpenSSH per-connection server daemon (10.0.0.1:46072). Oct 28 23:22:08.139536 systemd-logind[1542]: Removed session 7. Oct 28 23:22:08.202718 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 46072 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:22:08.203929 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:22:08.208215 systemd-logind[1542]: New session 8 of user core. Oct 28 23:22:08.214359 systemd[1]: Started session-8.scope - Session 8 of User core. 
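[Editor's note] The sudo sequence above deletes the default rule files from /etc/audit/rules.d and restarts audit-rules.service, after which augenrules reports "No rules". A sketch of the equivalent manual steps, assuming the standard augenrules layout (paths taken from the log):

```sh
# What the logged sudo commands amount to; use with care on a real host.
rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
augenrules --check   # reports whether /etc/audit/audit.rules is out of date
augenrules --load    # merges /etc/audit/rules.d/*.rules and loads them ("No rules" if the dir is empty)
```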
Oct 28 23:22:08.226494 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 28 23:22:08.226752 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 28 23:22:08.496111 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 28 23:22:08.515468 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 28 23:22:08.717660 dockerd[1804]: time="2025-10-28T23:22:08.717592348Z" level=info msg="Starting up" Oct 28 23:22:08.718829 dockerd[1804]: time="2025-10-28T23:22:08.718773524Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 28 23:22:08.730988 dockerd[1804]: time="2025-10-28T23:22:08.730879512Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 28 23:22:08.936919 dockerd[1804]: time="2025-10-28T23:22:08.936871978Z" level=info msg="Loading containers: start." Oct 28 23:22:08.945150 kernel: Initializing XFRM netlink socket Oct 28 23:22:09.168482 systemd-networkd[1480]: docker0: Link UP Oct 28 23:22:09.171964 dockerd[1804]: time="2025-10-28T23:22:09.171917525Z" level=info msg="Loading containers: done." Oct 28 23:22:09.183899 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4275490569-merged.mount: Deactivated successfully. Oct 28 23:22:09.185290 dockerd[1804]: time="2025-10-28T23:22:09.185229390Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 28 23:22:09.185386 dockerd[1804]: time="2025-10-28T23:22:09.185326835Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 28 23:22:09.185505 dockerd[1804]: time="2025-10-28T23:22:09.185485975Z" level=info msg="Initializing buildkit" Oct 28 23:22:09.208966 dockerd[1804]: time="2025-10-28T23:22:09.208843330Z" level=info msg="Completed buildkit initialization" Oct 28 23:22:09.214004 dockerd[1804]: time="2025-10-28T23:22:09.213944550Z" level=info msg="Daemon has completed initialization" Oct 28 23:22:09.214185 dockerd[1804]: time="2025-10-28T23:22:09.214030038Z" level=info msg="API listen on /run/docker.sock" Oct 28 23:22:09.214321 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 28 23:22:09.764154 containerd[1565]: time="2025-10-28T23:22:09.763750370Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 28 23:22:10.606880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293073149.mount: Deactivated successfully. 
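[Editor's note] At this point dockerd is serving on /run/docker.sock while the image pulls around it (kube-apiserver:v1.33.5 and the rest) go through containerd's CRI plugin in the k8s.io namespace. A hedged sketch of watching both from the host, assuming crictl is installed and pointed at the containerd socket:

```sh
# Assumes crictl is present; endpoint matches the containerd socket from the log.
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl images                                   # CRI-pulled images, e.g. registry.k8s.io/kube-apiserver:v1.33.5
ctr -n k8s.io images ls -q | head               # the same store, seen from containerd's k8s.io namespace
docker --host unix:///run/docker.sock version   # the separate docker engine started above
```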
Oct 28 23:22:11.787643 containerd[1565]: time="2025-10-28T23:22:11.787594331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:11.788350 containerd[1565]: time="2025-10-28T23:22:11.788314871Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Oct 28 23:22:11.789349 containerd[1565]: time="2025-10-28T23:22:11.789318480Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:11.792465 containerd[1565]: time="2025-10-28T23:22:11.792437079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:11.793253 containerd[1565]: time="2025-10-28T23:22:11.793213869Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.029421653s" Oct 28 23:22:11.793306 containerd[1565]: time="2025-10-28T23:22:11.793258158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Oct 28 23:22:11.794421 containerd[1565]: time="2025-10-28T23:22:11.794395199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 28 23:22:13.134233 containerd[1565]: time="2025-10-28T23:22:13.134151405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:13.135430 containerd[1565]: time="2025-10-28T23:22:13.135227211Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Oct 28 23:22:13.137171 containerd[1565]: time="2025-10-28T23:22:13.137107754Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:13.139902 containerd[1565]: time="2025-10-28T23:22:13.139871363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:13.140837 containerd[1565]: time="2025-10-28T23:22:13.140810627Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.346294829s" Oct 28 23:22:13.141023 containerd[1565]: time="2025-10-28T23:22:13.140925158Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Oct 28 23:22:13.141504 
containerd[1565]: time="2025-10-28T23:22:13.141472897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 28 23:22:14.263177 containerd[1565]: time="2025-10-28T23:22:14.263105612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:14.263885 containerd[1565]: time="2025-10-28T23:22:14.263858056Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Oct 28 23:22:14.264784 containerd[1565]: time="2025-10-28T23:22:14.264745259Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:14.267075 containerd[1565]: time="2025-10-28T23:22:14.267028730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:14.268085 containerd[1565]: time="2025-10-28T23:22:14.268059631Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.126548122s" Oct 28 23:22:14.268140 containerd[1565]: time="2025-10-28T23:22:14.268090804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Oct 28 23:22:14.268813 containerd[1565]: time="2025-10-28T23:22:14.268780219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 28 23:22:15.321775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382036178.mount: Deactivated successfully. 
Oct 28 23:22:15.690714 containerd[1565]: time="2025-10-28T23:22:15.690582535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:15.691740 containerd[1565]: time="2025-10-28T23:22:15.691708249Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Oct 28 23:22:15.692575 containerd[1565]: time="2025-10-28T23:22:15.692515182Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:15.695783 containerd[1565]: time="2025-10-28T23:22:15.695733578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.426905284s" Oct 28 23:22:15.695861 containerd[1565]: time="2025-10-28T23:22:15.695794461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Oct 28 23:22:15.696402 containerd[1565]: time="2025-10-28T23:22:15.695950069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:15.696882 containerd[1565]: time="2025-10-28T23:22:15.696858849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 28 23:22:16.023215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 28 23:22:16.025468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:16.156477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:16.160681 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 28 23:22:16.258830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070674978.mount: Deactivated successfully. Oct 28 23:22:16.260309 kubelet[2106]: E1028 23:22:16.260137 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 28 23:22:16.264104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 28 23:22:16.264436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 28 23:22:16.265001 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.6M memory peak. 
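[Editor's note] The kubelet is restarted by systemd ("Scheduled restart job, restart counter is at 1") and fails again for the same missing config file, so it will keep cycling until the config appears. A hedged way to inspect the restart policy driving that loop, assuming the usual kubeadm drop-in is installed:

```sh
systemctl cat kubelet.service                                   # unit plus drop-ins (e.g. a kubeadm 10-kubeadm.conf)
systemctl show kubelet -p Restart -p RestartUSec -p NRestarts   # how often and how fast it retries
journalctl -u kubelet -n 20 --no-pager                          # the failure above, in context
```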
Oct 28 23:22:17.177917 containerd[1565]: time="2025-10-28T23:22:17.177849988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:17.178391 containerd[1565]: time="2025-10-28T23:22:17.178359719Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Oct 28 23:22:17.179322 containerd[1565]: time="2025-10-28T23:22:17.179272453Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:17.181739 containerd[1565]: time="2025-10-28T23:22:17.181699552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:17.182930 containerd[1565]: time="2025-10-28T23:22:17.182807619Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.485915787s" Oct 28 23:22:17.182930 containerd[1565]: time="2025-10-28T23:22:17.182843985Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Oct 28 23:22:17.183415 containerd[1565]: time="2025-10-28T23:22:17.183375817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 28 23:22:17.593440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3086302545.mount: Deactivated successfully. 
Oct 28 23:22:17.597931 containerd[1565]: time="2025-10-28T23:22:17.597888482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:22:17.598555 containerd[1565]: time="2025-10-28T23:22:17.598524591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 28 23:22:17.599235 containerd[1565]: time="2025-10-28T23:22:17.599187985Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:22:17.601916 containerd[1565]: time="2025-10-28T23:22:17.601701237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 28 23:22:17.602383 containerd[1565]: time="2025-10-28T23:22:17.602360492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 418.945415ms" Oct 28 23:22:17.602434 containerd[1565]: time="2025-10-28T23:22:17.602391594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 28 23:22:17.602897 containerd[1565]: time="2025-10-28T23:22:17.602841692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 28 23:22:18.099590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337453774.mount: Deactivated successfully. 
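[Editor's note] The pause:3.10 image above is pulled with the io.cri-containerd.pinned=pinned label (visible in the ImageCreate events), which keeps the sandbox image out of image garbage collection. A hedged way to list it, assuming ctr is available; the filter syntax may vary by containerd version:

```sh
ctr -n k8s.io images ls | grep pause
ctr -n k8s.io images ls 'labels."io.cri-containerd.pinned"==pinned'   # label name taken from the log above
```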
Oct 28 23:22:20.249704 containerd[1565]: time="2025-10-28T23:22:20.249657034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:20.250636 containerd[1565]: time="2025-10-28T23:22:20.250263334Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Oct 28 23:22:20.251340 containerd[1565]: time="2025-10-28T23:22:20.251313677Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:20.254522 containerd[1565]: time="2025-10-28T23:22:20.254486090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:20.255679 containerd[1565]: time="2025-10-28T23:22:20.255644605Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.65276889s" Oct 28 23:22:20.255679 containerd[1565]: time="2025-10-28T23:22:20.255679271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Oct 28 23:22:25.674546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:25.674805 systemd[1]: kubelet.service: Consumed 148ms CPU time, 108.6M memory peak. Oct 28 23:22:25.676902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:25.704063 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-8.scope)... Oct 28 23:22:25.704084 systemd[1]: Reloading... Oct 28 23:22:25.788170 zram_generator::config[2297]: No configuration found. Oct 28 23:22:25.983253 systemd[1]: Reloading finished in 278 ms. Oct 28 23:22:26.052549 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 28 23:22:26.052628 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 28 23:22:26.054139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:26.054183 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95.1M memory peak. Oct 28 23:22:26.055438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:26.165785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:26.169230 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 23:22:26.198974 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:22:26.198974 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 23:22:26.198974 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:22:26.199242 kubelet[2345]: I1028 23:22:26.199015 2345 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 23:22:26.958153 kubelet[2345]: I1028 23:22:26.957404 2345 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 28 23:22:26.958153 kubelet[2345]: I1028 23:22:26.957433 2345 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 23:22:26.958153 kubelet[2345]: I1028 23:22:26.957617 2345 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 23:22:26.976642 kubelet[2345]: E1028 23:22:26.976614 2345 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 28 23:22:26.976952 kubelet[2345]: I1028 23:22:26.976934 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:22:26.986317 kubelet[2345]: I1028 23:22:26.986298 2345 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 23:22:26.988753 kubelet[2345]: I1028 23:22:26.988735 2345 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 28 23:22:26.989718 kubelet[2345]: I1028 23:22:26.989683 2345 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 23:22:26.989849 kubelet[2345]: I1028 23:22:26.989715 2345 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 23:22:26.989941 kubelet[2345]: I1028 23:22:26.989924 2345 topology_manager.go:138] "Creating 
topology manager with none policy" Oct 28 23:22:26.989941 kubelet[2345]: I1028 23:22:26.989935 2345 container_manager_linux.go:303] "Creating device plugin manager" Oct 28 23:22:26.990629 kubelet[2345]: I1028 23:22:26.990608 2345 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:22:26.993526 kubelet[2345]: I1028 23:22:26.993494 2345 kubelet.go:480] "Attempting to sync node with API server" Oct 28 23:22:26.993526 kubelet[2345]: I1028 23:22:26.993517 2345 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 23:22:26.993688 kubelet[2345]: I1028 23:22:26.993540 2345 kubelet.go:386] "Adding apiserver pod source" Oct 28 23:22:26.994948 kubelet[2345]: I1028 23:22:26.994517 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 23:22:26.996736 kubelet[2345]: E1028 23:22:26.996695 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 28 23:22:26.996858 kubelet[2345]: E1028 23:22:26.996837 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 28 23:22:26.996944 kubelet[2345]: I1028 23:22:26.996850 2345 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 23:22:26.997813 kubelet[2345]: I1028 23:22:26.997670 2345 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 23:22:26.997813 kubelet[2345]: W1028 23:22:26.997786 2345 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
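[Editor's note] The certificate-signing and list/watch failures above all trace back to 10.0.0.93:6443 being refused, which is expected while the kube-apiserver static pod has not started yet. A hedged check from the host; the kubeconfig paths are the usual kubeadm ones and are not confirmed by this log:

```sh
curl -sk https://10.0.0.93:6443/healthz ; echo    # "connection refused" until kube-apiserver is serving
grep -H 'server:' /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf 2>/dev/null
```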
Oct 28 23:22:27.001033 kubelet[2345]: I1028 23:22:27.000578 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 23:22:27.001033 kubelet[2345]: I1028 23:22:27.000631 2345 server.go:1289] "Started kubelet" Oct 28 23:22:27.001033 kubelet[2345]: I1028 23:22:27.000765 2345 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 23:22:27.001876 kubelet[2345]: I1028 23:22:27.001786 2345 server.go:317] "Adding debug handlers to kubelet server" Oct 28 23:22:27.001969 kubelet[2345]: I1028 23:22:27.001943 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 23:22:27.003158 kubelet[2345]: I1028 23:22:27.003092 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 23:22:27.003522 kubelet[2345]: I1028 23:22:27.003500 2345 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 23:22:27.004138 kubelet[2345]: I1028 23:22:27.003980 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 23:22:27.004443 kubelet[2345]: E1028 23:22:27.004420 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:22:27.004479 kubelet[2345]: I1028 23:22:27.004451 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 23:22:27.004597 kubelet[2345]: I1028 23:22:27.004579 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 23:22:27.004638 kubelet[2345]: I1028 23:22:27.004626 2345 reconciler.go:26] "Reconciler: start to sync state" Oct 28 23:22:27.004946 kubelet[2345]: E1028 23:22:27.004906 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 28 23:22:27.005275 kubelet[2345]: I1028 23:22:27.005200 2345 factory.go:223] Registration of the systemd container factory successfully Oct 28 23:22:27.005338 kubelet[2345]: I1028 23:22:27.005314 2345 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 23:22:27.005656 kubelet[2345]: E1028 23:22:27.005317 2345 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 23:22:27.005717 kubelet[2345]: E1028 23:22:27.005688 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Oct 28 23:22:27.006403 kubelet[2345]: I1028 23:22:27.006379 2345 factory.go:223] Registration of the containerd container factory successfully Oct 28 23:22:27.007749 kubelet[2345]: E1028 23:22:27.005838 2345 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cb2108ff4ab5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 23:22:27.000593077 +0000 UTC m=+0.828332458,LastTimestamp:2025-10-28 23:22:27.000593077 +0000 UTC m=+0.828332458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 23:22:27.018703 kubelet[2345]: I1028 23:22:27.018669 2345 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 23:22:27.018703 kubelet[2345]: I1028 23:22:27.018686 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 23:22:27.018703 kubelet[2345]: I1028 23:22:27.018699 2345 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:22:27.020674 kubelet[2345]: I1028 23:22:27.020644 2345 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 28 23:22:27.021715 kubelet[2345]: I1028 23:22:27.021693 2345 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 28 23:22:27.021715 kubelet[2345]: I1028 23:22:27.021717 2345 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 28 23:22:27.021810 kubelet[2345]: I1028 23:22:27.021731 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
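[Editor's note] At this point the kubelet itself is up, listening on 0.0.0.0:10250 and exposing the podresources socket noted above, even though node "localhost", its lease, and its events still cannot be created. A hedged probe of the local endpoints; the healthz request may be rejected unless anonymous auth is enabled:

```sh
ss -ltnp | grep 10250                                   # the listener from the log above
curl -sk https://127.0.0.1:10250/healthz ; echo         # may return 401/403 depending on kubelet auth settings
ls -l /var/lib/kubelet/pod-resources/kubelet.sock       # podresources endpoint from the log
```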
Oct 28 23:22:27.021810 kubelet[2345]: I1028 23:22:27.021738 2345 kubelet.go:2436] "Starting kubelet main sync loop" Oct 28 23:22:27.021810 kubelet[2345]: E1028 23:22:27.021770 2345 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 23:22:27.105421 kubelet[2345]: E1028 23:22:27.105359 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 28 23:22:27.105532 kubelet[2345]: I1028 23:22:27.105494 2345 policy_none.go:49] "None policy: Start" Oct 28 23:22:27.105532 kubelet[2345]: I1028 23:22:27.105510 2345 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 23:22:27.105532 kubelet[2345]: I1028 23:22:27.105522 2345 state_mem.go:35] "Initializing new in-memory state store" Oct 28 23:22:27.105749 kubelet[2345]: E1028 23:22:27.105699 2345 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 28 23:22:27.110599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 28 23:22:27.122469 kubelet[2345]: E1028 23:22:27.122439 2345 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 28 23:22:27.125423 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 28 23:22:27.128007 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 28 23:22:27.140861 kubelet[2345]: E1028 23:22:27.140817 2345 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:22:27.141105 kubelet[2345]: I1028 23:22:27.141009 2345 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:22:27.141105 kubelet[2345]: I1028 23:22:27.141021 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:22:27.141285 kubelet[2345]: I1028 23:22:27.141260 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:22:27.142415 kubelet[2345]: E1028 23:22:27.142286 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 28 23:22:27.142415 kubelet[2345]: E1028 23:22:27.142328 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 28 23:22:27.206747 kubelet[2345]: E1028 23:22:27.206714 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Oct 28 23:22:27.242748 kubelet[2345]: I1028 23:22:27.242663 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:22:27.243044 kubelet[2345]: E1028 23:22:27.242999 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 28 23:22:27.335234 systemd[1]: Created slice kubepods-burstable-pod75c3630a5f479de3d13ae4352411a925.slice - libcontainer container kubepods-burstable-pod75c3630a5f479de3d13ae4352411a925.slice. Oct 28 23:22:27.342765 kubelet[2345]: E1028 23:22:27.342740 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:27.345943 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 28 23:22:27.347157 kubelet[2345]: E1028 23:22:27.347109 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:27.348346 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Oct 28 23:22:27.349616 kubelet[2345]: E1028 23:22:27.349597 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:27.407216 kubelet[2345]: I1028 23:22:27.407187 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:27.407283 kubelet[2345]: I1028 23:22:27.407220 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:27.407283 kubelet[2345]: I1028 23:22:27.407239 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:27.407283 kubelet[2345]: I1028 23:22:27.407255 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:27.407283 kubelet[2345]: I1028 23:22:27.407269 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:27.407283 kubelet[2345]: I1028 23:22:27.407284 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:27.407386 kubelet[2345]: I1028 23:22:27.407298 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:27.407386 kubelet[2345]: I1028 23:22:27.407311 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:27.407386 kubelet[2345]: I1028 23:22:27.407324 2345 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:27.444300 kubelet[2345]: I1028 23:22:27.444157 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:22:27.444488 kubelet[2345]: E1028 23:22:27.444452 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 28 23:22:27.608066 kubelet[2345]: E1028 23:22:27.608026 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Oct 28 23:22:27.643888 kubelet[2345]: E1028 23:22:27.643568 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.644108 containerd[1565]: time="2025-10-28T23:22:27.644073766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75c3630a5f479de3d13ae4352411a925,Namespace:kube-system,Attempt:0,}" Oct 28 23:22:27.647841 kubelet[2345]: E1028 23:22:27.647677 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.648240 containerd[1565]: time="2025-10-28T23:22:27.648204111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 28 23:22:27.650676 kubelet[2345]: E1028 23:22:27.650521 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.651293 containerd[1565]: time="2025-10-28T23:22:27.651090996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 28 23:22:27.666542 containerd[1565]: time="2025-10-28T23:22:27.666508046Z" level=info msg="connecting to shim 5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59" address="unix:///run/containerd/s/e71b676f5df45ad66ea51058bf9abe27310d1cb31e6db0c24cdf26d9ad9552ac" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:27.678849 containerd[1565]: time="2025-10-28T23:22:27.678770127Z" level=info msg="connecting to shim 0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1" address="unix:///run/containerd/s/ebc4d8785ba4468b1d59b960d67c06d84fdd5997e9107587fa0cbb9bb83f57cd" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:27.680344 containerd[1565]: time="2025-10-28T23:22:27.680273982Z" level=info msg="connecting to shim 7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b" address="unix:///run/containerd/s/332b2ab6b036dd472a4200e44055469a4834effc01ae4e716ad6cbf40a02d81e" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:27.698280 systemd[1]: Started cri-containerd-5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59.scope - libcontainer container 
5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59. Oct 28 23:22:27.702301 systemd[1]: Started cri-containerd-0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1.scope - libcontainer container 0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1. Oct 28 23:22:27.710472 systemd[1]: Started cri-containerd-7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b.scope - libcontainer container 7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b. Oct 28 23:22:27.740781 containerd[1565]: time="2025-10-28T23:22:27.740705327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:75c3630a5f479de3d13ae4352411a925,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59\"" Oct 28 23:22:27.741762 containerd[1565]: time="2025-10-28T23:22:27.741708778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1\"" Oct 28 23:22:27.741992 kubelet[2345]: E1028 23:22:27.741913 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.742801 kubelet[2345]: E1028 23:22:27.742653 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.745963 containerd[1565]: time="2025-10-28T23:22:27.745889184Z" level=info msg="CreateContainer within sandbox \"5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 28 23:22:27.747942 containerd[1565]: time="2025-10-28T23:22:27.747907541Z" level=info msg="CreateContainer within sandbox \"0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 28 23:22:27.755263 containerd[1565]: time="2025-10-28T23:22:27.755232062Z" level=info msg="Container 6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:22:27.756029 containerd[1565]: time="2025-10-28T23:22:27.756001871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b\"" Oct 28 23:22:27.757362 kubelet[2345]: E1028 23:22:27.757341 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:27.760177 containerd[1565]: time="2025-10-28T23:22:27.760091367Z" level=info msg="Container ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:22:27.761251 containerd[1565]: time="2025-10-28T23:22:27.761222253Z" level=info msg="CreateContainer within sandbox \"7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 28 23:22:27.767728 containerd[1565]: time="2025-10-28T23:22:27.767625141Z" level=info msg="CreateContainer within sandbox 
\"0f8300e124df5a14778f3467a667b8ffada64b18bb7d227546ad6d8a27700bc1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554\"" Oct 28 23:22:27.768559 containerd[1565]: time="2025-10-28T23:22:27.768534639Z" level=info msg="StartContainer for \"ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554\"" Oct 28 23:22:27.770647 containerd[1565]: time="2025-10-28T23:22:27.770619315Z" level=info msg="connecting to shim ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554" address="unix:///run/containerd/s/ebc4d8785ba4468b1d59b960d67c06d84fdd5997e9107587fa0cbb9bb83f57cd" protocol=ttrpc version=3 Oct 28 23:22:27.770824 containerd[1565]: time="2025-10-28T23:22:27.769436087Z" level=info msg="CreateContainer within sandbox \"5d382044357a4590676410f88a50d298bcecd5e93f482cfb252cfc92f7ea3d59\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d\"" Oct 28 23:22:27.771881 containerd[1565]: time="2025-10-28T23:22:27.771847198Z" level=info msg="Container 01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:22:27.772265 containerd[1565]: time="2025-10-28T23:22:27.772228818Z" level=info msg="StartContainer for \"6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d\"" Oct 28 23:22:27.773406 containerd[1565]: time="2025-10-28T23:22:27.773376844Z" level=info msg="connecting to shim 6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d" address="unix:///run/containerd/s/e71b676f5df45ad66ea51058bf9abe27310d1cb31e6db0c24cdf26d9ad9552ac" protocol=ttrpc version=3 Oct 28 23:22:27.779283 containerd[1565]: time="2025-10-28T23:22:27.779230590Z" level=info msg="CreateContainer within sandbox \"7b32517377b7b2ab68fbe47d0809ad80e636c963cc5115e34433660e3790db6b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35\"" Oct 28 23:22:27.780003 containerd[1565]: time="2025-10-28T23:22:27.779980976Z" level=info msg="StartContainer for \"01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35\"" Oct 28 23:22:27.781004 containerd[1565]: time="2025-10-28T23:22:27.780971692Z" level=info msg="connecting to shim 01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35" address="unix:///run/containerd/s/332b2ab6b036dd472a4200e44055469a4834effc01ae4e716ad6cbf40a02d81e" protocol=ttrpc version=3 Oct 28 23:22:27.798299 systemd[1]: Started cri-containerd-ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554.scope - libcontainer container ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554. Oct 28 23:22:27.802320 systemd[1]: Started cri-containerd-01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35.scope - libcontainer container 01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35. Oct 28 23:22:27.803239 systemd[1]: Started cri-containerd-6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d.scope - libcontainer container 6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d. 
Oct 28 23:22:27.848555 kubelet[2345]: I1028 23:22:27.848401 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:22:27.850550 kubelet[2345]: E1028 23:22:27.849299 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Oct 28 23:22:27.850754 containerd[1565]: time="2025-10-28T23:22:27.850679474Z" level=info msg="StartContainer for \"01e2e093e737028aaac143575aa29f8266a10c5e3c7ceae4634ddcb2fc2d7b35\" returns successfully" Oct 28 23:22:27.851213 containerd[1565]: time="2025-10-28T23:22:27.851174311Z" level=info msg="StartContainer for \"6b81e6387f6632c85739f77fec88135088319a1a13ee00faadb205ea01ab4e3d\" returns successfully" Oct 28 23:22:27.854379 containerd[1565]: time="2025-10-28T23:22:27.854348102Z" level=info msg="StartContainer for \"ee3f6956ee5769c22a26c4da7bb26e1620aae497ac906f24c6fb69c209f22554\" returns successfully" Oct 28 23:22:28.034695 kubelet[2345]: E1028 23:22:28.034506 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:28.034695 kubelet[2345]: E1028 23:22:28.034634 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:28.036200 kubelet[2345]: E1028 23:22:28.036178 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:28.036290 kubelet[2345]: E1028 23:22:28.036270 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:28.038812 kubelet[2345]: E1028 23:22:28.038791 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:28.038904 kubelet[2345]: E1028 23:22:28.038887 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:28.651092 kubelet[2345]: I1028 23:22:28.651057 2345 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:22:28.969653 kubelet[2345]: E1028 23:22:28.969537 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 28 23:22:28.996081 kubelet[2345]: I1028 23:22:28.996027 2345 apiserver.go:52] "Watching apiserver" Oct 28 23:22:29.041306 kubelet[2345]: E1028 23:22:29.041269 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:29.041429 kubelet[2345]: E1028 23:22:29.041326 2345 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 28 23:22:29.041451 kubelet[2345]: E1028 23:22:29.041434 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:29.043161 kubelet[2345]: E1028 23:22:29.041517 2345 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:29.061083 kubelet[2345]: I1028 23:22:29.061030 2345 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:22:29.061083 kubelet[2345]: E1028 23:22:29.061069 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 28 23:22:29.106264 kubelet[2345]: I1028 23:22:29.106229 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:29.106355 kubelet[2345]: I1028 23:22:29.106283 2345 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 23:22:29.112612 kubelet[2345]: E1028 23:22:29.112513 2345 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1872cb2108ff4ab5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 23:22:27.000593077 +0000 UTC m=+0.828332458,LastTimestamp:2025-10-28 23:22:27.000593077 +0000 UTC m=+0.828332458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 23:22:29.114504 kubelet[2345]: E1028 23:22:29.114473 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:29.114504 kubelet[2345]: I1028 23:22:29.114504 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:29.116265 kubelet[2345]: E1028 23:22:29.116239 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:29.116265 kubelet[2345]: I1028 23:22:29.116263 2345 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:29.118050 kubelet[2345]: E1028 23:22:29.118022 2345 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:29.171299 kubelet[2345]: E1028 23:22:29.171185 2345 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1872cb21094735af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-28 23:22:27.005306287 +0000 UTC m=+0.833045668,LastTimestamp:2025-10-28 23:22:27.005306287 +0000 UTC m=+0.833045668,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 28 23:22:31.108323 systemd[1]: Reload requested from client PID 2632 ('systemctl') (unit session-8.scope)... Oct 28 23:22:31.108338 systemd[1]: Reloading... Oct 28 23:22:31.177152 zram_generator::config[2676]: No configuration found. Oct 28 23:22:31.439663 systemd[1]: Reloading finished in 331 ms. Oct 28 23:22:31.467724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:31.487995 systemd[1]: kubelet.service: Deactivated successfully. Oct 28 23:22:31.488330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:31.488391 systemd[1]: kubelet.service: Consumed 1.198s CPU time, 129.9M memory peak. Oct 28 23:22:31.490113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 28 23:22:31.651626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 28 23:22:31.657419 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 28 23:22:31.711176 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:22:31.711176 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 28 23:22:31.711176 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 28 23:22:31.711176 kubelet[2718]: I1028 23:22:31.711112 2718 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 28 23:22:31.719568 kubelet[2718]: I1028 23:22:31.719531 2718 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 28 23:22:31.719568 kubelet[2718]: I1028 23:22:31.719561 2718 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 28 23:22:31.719782 kubelet[2718]: I1028 23:22:31.719766 2718 server.go:956] "Client rotation is on, will bootstrap in background" Oct 28 23:22:31.721206 kubelet[2718]: I1028 23:22:31.721102 2718 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 28 23:22:31.724703 kubelet[2718]: I1028 23:22:31.724637 2718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 28 23:22:31.728276 kubelet[2718]: I1028 23:22:31.728259 2718 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 28 23:22:31.731834 kubelet[2718]: I1028 23:22:31.731358 2718 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 28 23:22:31.731834 kubelet[2718]: I1028 23:22:31.731596 2718 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 28 23:22:31.731987 kubelet[2718]: I1028 23:22:31.731622 2718 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 28 23:22:31.732071 kubelet[2718]: I1028 23:22:31.731996 2718 topology_manager.go:138] "Creating topology manager with none policy" Oct 28 23:22:31.732071 kubelet[2718]: I1028 23:22:31.732008 2718 container_manager_linux.go:303] "Creating device plugin manager" Oct 28 23:22:31.732071 kubelet[2718]: I1028 23:22:31.732057 2718 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:22:31.732259 kubelet[2718]: I1028 23:22:31.732242 2718 kubelet.go:480] "Attempting to sync node with API server" Oct 28 23:22:31.732292 kubelet[2718]: I1028 23:22:31.732263 2718 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 28 23:22:31.732326 kubelet[2718]: I1028 23:22:31.732300 2718 kubelet.go:386] "Adding apiserver pod source" Oct 28 23:22:31.732326 kubelet[2718]: I1028 23:22:31.732313 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 28 23:22:31.735253 kubelet[2718]: I1028 23:22:31.735228 2718 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 28 23:22:31.735945 kubelet[2718]: I1028 23:22:31.735920 2718 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 28 23:22:31.741416 kubelet[2718]: I1028 23:22:31.741383 2718 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 28 23:22:31.741486 kubelet[2718]: I1028 23:22:31.741439 2718 server.go:1289] "Started kubelet" Oct 28 23:22:31.742698 kubelet[2718]: I1028 23:22:31.742647 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 28 23:22:31.742916 
kubelet[2718]: I1028 23:22:31.742896 2718 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 28 23:22:31.742965 kubelet[2718]: I1028 23:22:31.742945 2718 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 28 23:22:31.743753 kubelet[2718]: I1028 23:22:31.743732 2718 server.go:317] "Adding debug handlers to kubelet server" Oct 28 23:22:31.747472 kubelet[2718]: I1028 23:22:31.747432 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 28 23:22:31.748504 kubelet[2718]: I1028 23:22:31.748477 2718 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 28 23:22:31.753391 kubelet[2718]: I1028 23:22:31.753105 2718 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 28 23:22:31.753391 kubelet[2718]: I1028 23:22:31.753300 2718 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 28 23:22:31.753516 kubelet[2718]: I1028 23:22:31.753412 2718 reconciler.go:26] "Reconciler: start to sync state" Oct 28 23:22:31.755017 kubelet[2718]: I1028 23:22:31.754936 2718 factory.go:223] Registration of the systemd container factory successfully Oct 28 23:22:31.755083 kubelet[2718]: I1028 23:22:31.755038 2718 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 28 23:22:31.756360 kubelet[2718]: E1028 23:22:31.756319 2718 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 28 23:22:31.758084 kubelet[2718]: I1028 23:22:31.758061 2718 factory.go:223] Registration of the containerd container factory successfully Oct 28 23:22:31.767640 kubelet[2718]: I1028 23:22:31.767458 2718 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 28 23:22:31.770011 kubelet[2718]: I1028 23:22:31.769980 2718 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 28 23:22:31.770195 kubelet[2718]: I1028 23:22:31.770151 2718 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 28 23:22:31.771032 kubelet[2718]: I1028 23:22:31.770183 2718 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
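[Annotation] The HardEvictionThresholds in the NodeConfig dump above (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) are what the eviction manager started below enforces. A simplified sketch of evaluating such thresholds against node stats; the capacity and observed figures are invented sample values, and this illustrates the rule only, not kubelet's implementation:

# Simplified illustration of hard-eviction checks using the thresholds
# logged in the kubelet NodeConfig above. Sample observations are invented.
THRESHOLDS = {
    "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def signals_under_pressure(observed: dict, capacity: dict) -> list[str]:
    """Return the signals whose observed value is below the hard threshold."""
    breached = []
    for signal, (kind, limit) in THRESHOLDS.items():
        cutoff = limit if kind == "quantity" else limit * capacity[signal]
        if observed[signal] < cutoff:
            breached.append(signal)
    return breached

# Hypothetical node: 8 GiB RAM, 40 GiB nodefs, shared imagefs, 2M inodes.
capacity = {"memory.available": 8 << 30, "nodefs.available": 40 << 30,
            "nodefs.inodesFree": 2_000_000, "imagefs.available": 40 << 30,
            "imagefs.inodesFree": 2_000_000}
observed = {"memory.available": 80 << 20, "nodefs.available": 20 << 30,
            "nodefs.inodesFree": 1_500_000, "imagefs.available": 20 << 30,
            "imagefs.inodesFree": 1_500_000}
print(signals_under_pressure(observed, capacity))   # ['memory.available']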
Oct 28 23:22:31.771195 kubelet[2718]: I1028 23:22:31.771164 2718 kubelet.go:2436] "Starting kubelet main sync loop" Oct 28 23:22:31.771682 kubelet[2718]: E1028 23:22:31.771630 2718 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 28 23:22:31.793760 kubelet[2718]: I1028 23:22:31.793722 2718 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 28 23:22:31.793760 kubelet[2718]: I1028 23:22:31.793753 2718 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 28 23:22:31.793890 kubelet[2718]: I1028 23:22:31.793777 2718 state_mem.go:36] "Initialized new in-memory state store" Oct 28 23:22:31.793931 kubelet[2718]: I1028 23:22:31.793911 2718 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 28 23:22:31.793960 kubelet[2718]: I1028 23:22:31.793927 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 28 23:22:31.793960 kubelet[2718]: I1028 23:22:31.793946 2718 policy_none.go:49] "None policy: Start" Oct 28 23:22:31.793960 kubelet[2718]: I1028 23:22:31.793957 2718 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 28 23:22:31.794034 kubelet[2718]: I1028 23:22:31.793966 2718 state_mem.go:35] "Initializing new in-memory state store" Oct 28 23:22:31.794054 kubelet[2718]: I1028 23:22:31.794045 2718 state_mem.go:75] "Updated machine memory state" Oct 28 23:22:31.797641 kubelet[2718]: E1028 23:22:31.797608 2718 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 28 23:22:31.797802 kubelet[2718]: I1028 23:22:31.797783 2718 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 28 23:22:31.797832 kubelet[2718]: I1028 23:22:31.797800 2718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 28 23:22:31.798466 kubelet[2718]: I1028 23:22:31.798328 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 28 23:22:31.799156 kubelet[2718]: E1028 23:22:31.799112 2718 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 28 23:22:31.873036 kubelet[2718]: I1028 23:22:31.873000 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:31.873182 kubelet[2718]: I1028 23:22:31.873160 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:31.873250 kubelet[2718]: I1028 23:22:31.873230 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.903785 kubelet[2718]: I1028 23:22:31.903743 2718 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 28 23:22:31.911140 kubelet[2718]: I1028 23:22:31.911091 2718 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 28 23:22:31.911257 kubelet[2718]: I1028 23:22:31.911187 2718 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 28 23:22:31.954552 kubelet[2718]: I1028 23:22:31.954509 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:31.954552 kubelet[2718]: I1028 23:22:31.954561 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.954712 kubelet[2718]: I1028 23:22:31.954586 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.954712 kubelet[2718]: I1028 23:22:31.954614 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.954712 kubelet[2718]: I1028 23:22:31.954629 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:31.954712 kubelet[2718]: I1028 23:22:31.954643 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c3630a5f479de3d13ae4352411a925-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"75c3630a5f479de3d13ae4352411a925\") " pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:31.954712 kubelet[2718]: I1028 23:22:31.954658 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.954821 kubelet[2718]: I1028 23:22:31.954673 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:31.954821 kubelet[2718]: I1028 23:22:31.954692 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:32.179024 kubelet[2718]: E1028 23:22:32.178967 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.179340 kubelet[2718]: E1028 23:22:32.179314 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.179456 kubelet[2718]: E1028 23:22:32.179441 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.733454 kubelet[2718]: I1028 23:22:32.733404 2718 apiserver.go:52] "Watching apiserver" Oct 28 23:22:32.754253 kubelet[2718]: I1028 23:22:32.754204 2718 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 28 23:22:32.782084 kubelet[2718]: I1028 23:22:32.782043 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:32.782210 kubelet[2718]: I1028 23:22:32.782093 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:32.782418 kubelet[2718]: I1028 23:22:32.782405 2718 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:32.789087 kubelet[2718]: E1028 23:22:32.788934 2718 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 28 23:22:32.789571 kubelet[2718]: E1028 23:22:32.789196 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.789571 kubelet[2718]: E1028 23:22:32.789387 2718 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 28 23:22:32.789571 kubelet[2718]: E1028 23:22:32.789509 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.790294 kubelet[2718]: E1028 23:22:32.790261 2718 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 28 23:22:32.790421 kubelet[2718]: E1028 23:22:32.790401 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:32.799403 kubelet[2718]: I1028 23:22:32.799253 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7992389439999998 podStartE2EDuration="1.799238944s" podCreationTimestamp="2025-10-28 23:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:22:32.798606352 +0000 UTC m=+1.137795885" watchObservedRunningTime="2025-10-28 23:22:32.799238944 +0000 UTC m=+1.138428477" Oct 28 23:22:32.806189 kubelet[2718]: I1028 23:22:32.805435 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.805423855 podStartE2EDuration="1.805423855s" podCreationTimestamp="2025-10-28 23:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:22:32.805358775 +0000 UTC m=+1.144548268" watchObservedRunningTime="2025-10-28 23:22:32.805423855 +0000 UTC m=+1.144613348" Oct 28 23:22:32.812820 kubelet[2718]: I1028 23:22:32.812781 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8127707659999999 podStartE2EDuration="1.812770766s" podCreationTimestamp="2025-10-28 23:22:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:22:32.812627278 +0000 UTC m=+1.151816811" watchObservedRunningTime="2025-10-28 23:22:32.812770766 +0000 UTC m=+1.151960299" Oct 28 23:22:33.783490 kubelet[2718]: E1028 23:22:33.783447 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:33.783823 kubelet[2718]: E1028 23:22:33.783517 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:33.784178 kubelet[2718]: E1028 23:22:33.784157 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:36.015486 kubelet[2718]: I1028 23:22:36.015453 2718 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 28 23:22:36.015846 containerd[1565]: time="2025-10-28T23:22:36.015747779Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 28 23:22:36.016209 kubelet[2718]: I1028 23:22:36.015905 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 28 23:22:36.579325 kubelet[2718]: E1028 23:22:36.579236 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:36.804004 systemd[1]: Created slice kubepods-besteffort-pod32192b48_0c30_4428_b86c_e71e920709b9.slice - libcontainer container kubepods-besteffort-pod32192b48_0c30_4428_b86c_e71e920709b9.slice. Oct 28 23:22:36.885372 kubelet[2718]: I1028 23:22:36.885038 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/32192b48-0c30-4428-b86c-e71e920709b9-kube-proxy\") pod \"kube-proxy-q28p4\" (UID: \"32192b48-0c30-4428-b86c-e71e920709b9\") " pod="kube-system/kube-proxy-q28p4" Oct 28 23:22:36.885372 kubelet[2718]: I1028 23:22:36.885085 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32192b48-0c30-4428-b86c-e71e920709b9-lib-modules\") pod \"kube-proxy-q28p4\" (UID: \"32192b48-0c30-4428-b86c-e71e920709b9\") " pod="kube-system/kube-proxy-q28p4" Oct 28 23:22:36.885372 kubelet[2718]: I1028 23:22:36.885104 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjld9\" (UniqueName: \"kubernetes.io/projected/32192b48-0c30-4428-b86c-e71e920709b9-kube-api-access-pjld9\") pod \"kube-proxy-q28p4\" (UID: \"32192b48-0c30-4428-b86c-e71e920709b9\") " pod="kube-system/kube-proxy-q28p4" Oct 28 23:22:36.885942 kubelet[2718]: I1028 23:22:36.885629 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32192b48-0c30-4428-b86c-e71e920709b9-xtables-lock\") pod \"kube-proxy-q28p4\" (UID: \"32192b48-0c30-4428-b86c-e71e920709b9\") " pod="kube-system/kube-proxy-q28p4" Oct 28 23:22:37.116011 kubelet[2718]: E1028 23:22:37.115473 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:37.116600 containerd[1565]: time="2025-10-28T23:22:37.116566092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q28p4,Uid:32192b48-0c30-4428-b86c-e71e920709b9,Namespace:kube-system,Attempt:0,}" Oct 28 23:22:37.145137 containerd[1565]: time="2025-10-28T23:22:37.145005358Z" level=info msg="connecting to shim bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3" address="unix:///run/containerd/s/4e17b6df9e15830e204ded9d2d201896fbfc02e2a991f25ecb014ab52d1c2b69" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:37.185975 systemd[1]: Created slice kubepods-besteffort-pod152579fe_24d4_41a0_9e29_cd57a3685d9c.slice - libcontainer container kubepods-besteffort-pod152579fe_24d4_41a0_9e29_cd57a3685d9c.slice. 
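[Annotation] Once the node object carries a pod CIDR, the kubelet pushes it to the runtime as logged above (newPodCIDR="192.168.0.0/24"), and pod IPs scheduled onto this node are expected to come from that range. A small standard-library check of an address against that CIDR; the sample pod IP is invented for illustration:

# Check a (hypothetical) pod IP against the pod CIDR reported in the log.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")
sample_pod_ip = ipaddress.ip_address("192.168.0.5")     # invented for illustration

print(sample_pod_ip in pod_cidr)                        # True: inside this node's pod range
print(ipaddress.ip_address("10.0.0.93") in pod_cidr)    # False: the API server address seen earlier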
Oct 28 23:22:37.188755 kubelet[2718]: I1028 23:22:37.188681 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/152579fe-24d4-41a0-9e29-cd57a3685d9c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-59kqr\" (UID: \"152579fe-24d4-41a0-9e29-cd57a3685d9c\") " pod="tigera-operator/tigera-operator-7dcd859c48-59kqr" Oct 28 23:22:37.188755 kubelet[2718]: I1028 23:22:37.188746 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfwk\" (UniqueName: \"kubernetes.io/projected/152579fe-24d4-41a0-9e29-cd57a3685d9c-kube-api-access-cnfwk\") pod \"tigera-operator-7dcd859c48-59kqr\" (UID: \"152579fe-24d4-41a0-9e29-cd57a3685d9c\") " pod="tigera-operator/tigera-operator-7dcd859c48-59kqr" Oct 28 23:22:37.202351 systemd[1]: Started cri-containerd-bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3.scope - libcontainer container bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3. Oct 28 23:22:37.225790 containerd[1565]: time="2025-10-28T23:22:37.225730917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q28p4,Uid:32192b48-0c30-4428-b86c-e71e920709b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3\"" Oct 28 23:22:37.226482 kubelet[2718]: E1028 23:22:37.226459 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:37.230113 containerd[1565]: time="2025-10-28T23:22:37.230068934Z" level=info msg="CreateContainer within sandbox \"bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 28 23:22:37.240078 containerd[1565]: time="2025-10-28T23:22:37.240035084Z" level=info msg="Container 65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:22:37.250223 containerd[1565]: time="2025-10-28T23:22:37.250165539Z" level=info msg="CreateContainer within sandbox \"bc86cf6bb70e5e31117fe30a7329a77eef15f3441ecffb4f21ee5aa66cf14ad3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa\"" Oct 28 23:22:37.252179 containerd[1565]: time="2025-10-28T23:22:37.251672023Z" level=info msg="StartContainer for \"65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa\"" Oct 28 23:22:37.253827 containerd[1565]: time="2025-10-28T23:22:37.253799794Z" level=info msg="connecting to shim 65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa" address="unix:///run/containerd/s/4e17b6df9e15830e204ded9d2d201896fbfc02e2a991f25ecb014ab52d1c2b69" protocol=ttrpc version=3 Oct 28 23:22:37.273288 systemd[1]: Started cri-containerd-65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa.scope - libcontainer container 65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa. 
Oct 28 23:22:37.310285 containerd[1565]: time="2025-10-28T23:22:37.309540390Z" level=info msg="StartContainer for \"65ce32bff4e8404f2539d3f4bd9d348fcb07b24d905e7955a87b193a0ee76eaa\" returns successfully" Oct 28 23:22:37.491672 containerd[1565]: time="2025-10-28T23:22:37.491565345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-59kqr,Uid:152579fe-24d4-41a0-9e29-cd57a3685d9c,Namespace:tigera-operator,Attempt:0,}" Oct 28 23:22:37.660663 containerd[1565]: time="2025-10-28T23:22:37.660605741Z" level=info msg="connecting to shim 400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35" address="unix:///run/containerd/s/27f6d08c6b0009a9186a25b68a89658f76d445043cd90253ac63857d87e51a0b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:37.686545 systemd[1]: Started cri-containerd-400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35.scope - libcontainer container 400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35. Oct 28 23:22:37.719455 containerd[1565]: time="2025-10-28T23:22:37.719390916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-59kqr,Uid:152579fe-24d4-41a0-9e29-cd57a3685d9c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35\"" Oct 28 23:22:37.721719 containerd[1565]: time="2025-10-28T23:22:37.721252461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 28 23:22:37.792864 kubelet[2718]: E1028 23:22:37.792824 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:37.802498 kubelet[2718]: I1028 23:22:37.802419 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q28p4" podStartSLOduration=1.802405831 podStartE2EDuration="1.802405831s" podCreationTimestamp="2025-10-28 23:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:22:37.802185103 +0000 UTC m=+6.141374636" watchObservedRunningTime="2025-10-28 23:22:37.802405831 +0000 UTC m=+6.141595364" Oct 28 23:22:38.138209 kubelet[2718]: E1028 23:22:38.138027 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:38.794603 kubelet[2718]: E1028 23:22:38.794563 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:40.508337 kubelet[2718]: E1028 23:22:40.508307 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:40.810354 kubelet[2718]: E1028 23:22:40.809174 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:40.975433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240524412.mount: Deactivated successfully. 
Oct 28 23:22:43.342029 containerd[1565]: time="2025-10-28T23:22:43.341971439Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:43.342551 containerd[1565]: time="2025-10-28T23:22:43.342527198Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 28 23:22:43.343946 containerd[1565]: time="2025-10-28T23:22:43.343907194Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:43.345742 containerd[1565]: time="2025-10-28T23:22:43.345702950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:22:43.346357 containerd[1565]: time="2025-10-28T23:22:43.346332330Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 5.625046337s" Oct 28 23:22:43.346357 containerd[1565]: time="2025-10-28T23:22:43.346357818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 28 23:22:43.357680 containerd[1565]: time="2025-10-28T23:22:43.357638855Z" level=info msg="CreateContainer within sandbox \"400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 28 23:22:43.363675 containerd[1565]: time="2025-10-28T23:22:43.363645539Z" level=info msg="Container 3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:22:43.365905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012402699.mount: Deactivated successfully. Oct 28 23:22:43.373302 containerd[1565]: time="2025-10-28T23:22:43.373250175Z" level=info msg="CreateContainer within sandbox \"400bb2868c50572484ddd4b53076dfe474b49243f7336d357ba24a46ce251c35\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40\"" Oct 28 23:22:43.374068 containerd[1565]: time="2025-10-28T23:22:43.373984186Z" level=info msg="StartContainer for \"3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40\"" Oct 28 23:22:43.374939 containerd[1565]: time="2025-10-28T23:22:43.374768251Z" level=info msg="connecting to shim 3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40" address="unix:///run/containerd/s/27f6d08c6b0009a9186a25b68a89658f76d445043cd90253ac63857d87e51a0b" protocol=ttrpc version=3 Oct 28 23:22:43.417286 systemd[1]: Started cri-containerd-3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40.scope - libcontainer container 3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40. 
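[Annotation] The containerd line above records the tigera-operator image pull: repo tag, image id and repo digest, reported byte size, and a wall-clock duration of 5.625046337s. A small sketch that extracts those fields from such a message with a regular expression; the pattern is written for this specific log format and is illustrative only:

# Illustrative parser for the containerd "Pulled image ... in <duration>" line above.
import re

msg = ('Pulled image "quay.io/tigera/operator:v1.38.7" with image id '
       '"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b", '
       'repo tag "quay.io/tigera/operator:v1.38.7", repo digest '
       '"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e", '
       'size "22147999" in 5.625046337s')

m = re.search(r'Pulled image "(?P<ref>[^"]+)".*size "(?P<size>\d+)" in (?P<seconds>[\d.]+)s', msg)
if m:
    print(m["ref"])                      # quay.io/tigera/operator:v1.38.7
    print(int(m["size"]) / 1e6, "MB")    # 22.147999 (from the reported byte size)
    print(float(m["seconds"]), "s")      # 5.625046337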
Oct 28 23:22:43.444374 containerd[1565]: time="2025-10-28T23:22:43.444340816Z" level=info msg="StartContainer for \"3c7367f2ceeea3e24a4be814fff653308061eb52af723d14d5d928e2bc01fc40\" returns successfully" Oct 28 23:22:43.827294 kubelet[2718]: I1028 23:22:43.827216 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-59kqr" podStartSLOduration=1.193851366 podStartE2EDuration="6.827200204s" podCreationTimestamp="2025-10-28 23:22:37 +0000 UTC" firstStartedPulling="2025-10-28 23:22:37.720943737 +0000 UTC m=+6.060133230" lastFinishedPulling="2025-10-28 23:22:43.354292535 +0000 UTC m=+11.693482068" observedRunningTime="2025-10-28 23:22:43.82690484 +0000 UTC m=+12.166094373" watchObservedRunningTime="2025-10-28 23:22:43.827200204 +0000 UTC m=+12.166389737" Oct 28 23:22:46.588160 kubelet[2718]: E1028 23:22:46.586732 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:48.563539 sudo[1782]: pam_unix(sudo:session): session closed for user root Oct 28 23:22:48.566168 sshd[1781]: Connection closed by 10.0.0.1 port 46072 Oct 28 23:22:48.568295 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Oct 28 23:22:48.572055 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:46072.service: Deactivated successfully. Oct 28 23:22:48.574049 systemd[1]: session-8.scope: Deactivated successfully. Oct 28 23:22:48.576265 systemd[1]: session-8.scope: Consumed 7.098s CPU time, 208.7M memory peak. Oct 28 23:22:48.578300 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Oct 28 23:22:48.579768 systemd-logind[1542]: Removed session 8. Oct 28 23:22:48.857511 update_engine[1543]: I20251028 23:22:48.857183 1543 update_attempter.cc:509] Updating boot flags... Oct 28 23:22:58.350797 systemd[1]: Created slice kubepods-besteffort-podbaf3e61e_cbd8_4829_9f7a_7108dd4478ef.slice - libcontainer container kubepods-besteffort-podbaf3e61e_cbd8_4829_9f7a_7108dd4478ef.slice. Oct 28 23:22:58.438640 kubelet[2718]: I1028 23:22:58.438576 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/baf3e61e-cbd8-4829-9f7a-7108dd4478ef-typha-certs\") pod \"calico-typha-64db58bf9b-dx925\" (UID: \"baf3e61e-cbd8-4829-9f7a-7108dd4478ef\") " pod="calico-system/calico-typha-64db58bf9b-dx925" Oct 28 23:22:58.438640 kubelet[2718]: I1028 23:22:58.438627 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84db\" (UniqueName: \"kubernetes.io/projected/baf3e61e-cbd8-4829-9f7a-7108dd4478ef-kube-api-access-n84db\") pod \"calico-typha-64db58bf9b-dx925\" (UID: \"baf3e61e-cbd8-4829-9f7a-7108dd4478ef\") " pod="calico-system/calico-typha-64db58bf9b-dx925" Oct 28 23:22:58.438640 kubelet[2718]: I1028 23:22:58.438649 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baf3e61e-cbd8-4829-9f7a-7108dd4478ef-tigera-ca-bundle\") pod \"calico-typha-64db58bf9b-dx925\" (UID: \"baf3e61e-cbd8-4829-9f7a-7108dd4478ef\") " pod="calico-system/calico-typha-64db58bf9b-dx925" Oct 28 23:22:58.510100 systemd[1]: Created slice kubepods-besteffort-pode801e6e3_44fc_43ae_99a1_96cc2e475b1d.slice - libcontainer container kubepods-besteffort-pode801e6e3_44fc_43ae_99a1_96cc2e475b1d.slice. 
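[Annotation] The pod_startup_latency_tracker entry above reports two durations for tigera-operator-7dcd859c48-59kqr: podStartE2EDuration (about 6.83s from pod creation to observed running) and a shorter podStartSLOduration (about 1.19s), which the timestamps suggest discounts the image-pull window between firstStartedPulling and lastFinishedPulling. A back-of-the-envelope check of that relationship using the logged timestamps, rounded to microseconds because Python's datetime drops the nanosecond digits:

# Rough check, using the timestamps logged above, that podStartSLOduration is
# approximately podStartE2EDuration minus the image-pull window.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
first_pull = datetime.strptime("2025-10-28 23:22:37.720943 +0000", fmt)
last_pull  = datetime.strptime("2025-10-28 23:22:43.354292 +0000", fmt)

e2e = 6.827200204                        # podStartE2EDuration from the log
pull = (last_pull - first_pull).total_seconds()

print(round(pull, 6))                    # ~5.633349 s spent pulling the image
print(round(e2e - pull, 6))              # ~1.193851 s, close to podStartSLOduration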
Oct 28 23:22:58.539368 kubelet[2718]: I1028 23:22:58.539314 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-flexvol-driver-host\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539368 kubelet[2718]: I1028 23:22:58.539359 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-cni-log-dir\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539368 kubelet[2718]: I1028 23:22:58.539378 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-var-lib-calico\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539555 kubelet[2718]: I1028 23:22:58.539392 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-xtables-lock\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539555 kubelet[2718]: I1028 23:22:58.539419 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-tigera-ca-bundle\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539555 kubelet[2718]: I1028 23:22:58.539432 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-policysync\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539555 kubelet[2718]: I1028 23:22:58.539462 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-var-run-calico\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539555 kubelet[2718]: I1028 23:22:58.539488 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-cni-net-dir\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539662 kubelet[2718]: I1028 23:22:58.539503 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-lib-modules\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539662 kubelet[2718]: I1028 23:22:58.539519 2718 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzcz\" (UniqueName: \"kubernetes.io/projected/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-kube-api-access-lrzcz\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539662 kubelet[2718]: I1028 23:22:58.539534 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-cni-bin-dir\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.539662 kubelet[2718]: I1028 23:22:58.539547 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e801e6e3-44fc-43ae-99a1-96cc2e475b1d-node-certs\") pod \"calico-node-67t2v\" (UID: \"e801e6e3-44fc-43ae-99a1-96cc2e475b1d\") " pod="calico-system/calico-node-67t2v" Oct 28 23:22:58.647495 kubelet[2718]: E1028 23:22:58.646737 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.647495 kubelet[2718]: W1028 23:22:58.646761 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.647495 kubelet[2718]: E1028 23:22:58.647027 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:22:58.652553 kubelet[2718]: E1028 23:22:58.652529 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.652553 kubelet[2718]: W1028 23:22:58.652549 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.652671 kubelet[2718]: E1028 23:22:58.652567 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:22:58.655228 kubelet[2718]: E1028 23:22:58.655040 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:58.655583 containerd[1565]: time="2025-10-28T23:22:58.655541839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64db58bf9b-dx925,Uid:baf3e61e-cbd8-4829-9f7a-7108dd4478ef,Namespace:calico-system,Attempt:0,}" Oct 28 23:22:58.702629 kubelet[2718]: E1028 23:22:58.702583 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:22:58.716942 containerd[1565]: time="2025-10-28T23:22:58.716891399Z" level=info msg="connecting to shim 916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded" address="unix:///run/containerd/s/f1e6dd8905fb62650a9b5c69c740b528b23e4ff07ddfc863ef436e07900657ee" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:58.732008 kubelet[2718]: E1028 23:22:58.731961 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.732008 kubelet[2718]: W1028 23:22:58.731987 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.732008 kubelet[2718]: E1028 23:22:58.732008 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:22:58.732190 kubelet[2718]: E1028 23:22:58.732150 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.732225 kubelet[2718]: W1028 23:22:58.732159 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.732268 kubelet[2718]: E1028 23:22:58.732223 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:22:58.732450 kubelet[2718]: E1028 23:22:58.732419 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.732450 kubelet[2718]: W1028 23:22:58.732433 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.732499 kubelet[2718]: E1028 23:22:58.732453 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:22:58.742146 kubelet[2718]: I1028 23:22:58.742053 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/74371a3d-9f28-4cd9-84c2-dcf19d44a64f-varrun\") pod \"csi-node-driver-hnlxb\" (UID: \"74371a3d-9f28-4cd9-84c2-dcf19d44a64f\") " pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:22:58.743731 kubelet[2718]: I1028 23:22:58.743691 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74371a3d-9f28-4cd9-84c2-dcf19d44a64f-kubelet-dir\") pod \"csi-node-driver-hnlxb\" (UID: \"74371a3d-9f28-4cd9-84c2-dcf19d44a64f\") " pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:22:58.744316 kubelet[2718]: I1028 23:22:58.744265 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/74371a3d-9f28-4cd9-84c2-dcf19d44a64f-socket-dir\") pod \"csi-node-driver-hnlxb\" (UID: \"74371a3d-9f28-4cd9-84c2-dcf19d44a64f\") " pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:22:58.744977 kubelet[2718]: I1028 23:22:58.744879 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pc66\" (UniqueName: \"kubernetes.io/projected/74371a3d-9f28-4cd9-84c2-dcf19d44a64f-kube-api-access-9pc66\") pod \"csi-node-driver-hnlxb\" (UID: \"74371a3d-9f28-4cd9-84c2-dcf19d44a64f\") " pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:22:58.746387 systemd[1]: Started cri-containerd-916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded.scope - libcontainer container 916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded. Oct 28 23:22:58.746485 kubelet[2718]: I1028 23:22:58.746475 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/74371a3d-9f28-4cd9-84c2-dcf19d44a64f-registration-dir\") pod \"csi-node-driver-hnlxb\" (UID: \"74371a3d-9f28-4cd9-84c2-dcf19d44a64f\") " pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:22:58.815730 kubelet[2718]: E1028 23:22:58.815678 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:58.817190 containerd[1565]: time="2025-10-28T23:22:58.817150698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-67t2v,Uid:e801e6e3-44fc-43ae-99a1-96cc2e475b1d,Namespace:calico-system,Attempt:0,}" Oct 28 23:22:58.834254 containerd[1565]: time="2025-10-28T23:22:58.834180652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64db58bf9b-dx925,Uid:baf3e61e-cbd8-4829-9f7a-7108dd4478ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded\"" Oct 28 23:22:58.836718 kubelet[2718]: E1028 23:22:58.836563 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:58.839083 containerd[1565]: time="2025-10-28T23:22:58.839047845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
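The three-line kubelet error that recurs throughout this window (driver-call.go:262, driver-call.go:149, plugins.go:703) is the FlexVolume prober trying to run the nodeagent~uds driver binary, which is not yet installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on this node; the call fails, the captured output stays empty, and unmarshalling empty output is what produces "unexpected end of JSON input". A minimal sketch of that failure mode (stand-alone program; probe and driverStatus are illustrative names, not kubelet source):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume driver
// prints in response to "init"; the real schema is richer.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probe(driverPath string) error {
	// The driver binary is missing on this host, so the call fails and the
	// captured output stays empty (compare the W1028 driver-call.go:149 lines).
	out, err := exec.Command(driverPath, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			driverPath, err, string(out))
	}
	// Unmarshalling the empty output is what yields the repeated
	// "unexpected end of JSON input" seen at driver-call.go:262.
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", string(out), uerr)
	}
	return nil
}

func main() {
	fmt.Println(probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

On Calico installs that ship the pod2daemon flexvol driver (note the flexvol-driver-host volume and the pod2daemon-flexvol image pull later in this log), these warnings usually stop once calico-node has installed the uds binary into that directory.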
Oct 28 23:22:58.850983 containerd[1565]: time="2025-10-28T23:22:58.850613603Z" level=info msg="connecting to shim 293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312" address="unix:///run/containerd/s/448cea6ffc4d0ed1d2d2078a971d2378e4de44921cc6016b2078dbc7a3b98cfd" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:22:58.867990 kubelet[2718]: E1028 23:22:58.867952 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:22:58.867990 kubelet[2718]: W1028 23:22:58.867974 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:22:58.867990 kubelet[2718]: E1028 23:22:58.867994 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 28 23:22:58.880345 systemd[1]: Started cri-containerd-293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312.scope - libcontainer container 293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312. Oct 28 23:22:58.908792 containerd[1565]: time="2025-10-28T23:22:58.908103390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-67t2v,Uid:e801e6e3-44fc-43ae-99a1-96cc2e475b1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\"" Oct 28 23:22:58.908873 kubelet[2718]: E1028 23:22:58.908810 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:22:59.888987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788936118.mount: Deactivated successfully. Oct 28 23:23:00.775381 kubelet[2718]: E1028 23:23:00.775252 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:02.568981 containerd[1565]: time="2025-10-28T23:23:02.568934532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:02.570164 containerd[1565]: time="2025-10-28T23:23:02.570093268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 28 23:23:02.570770 containerd[1565]: time="2025-10-28T23:23:02.570741825Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:02.573162 containerd[1565]: time="2025-10-28T23:23:02.573116464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:02.574279 containerd[1565]: time="2025-10-28T23:23:02.574248797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 3.735162507s" Oct 28 23:23:02.574319 containerd[1565]: time="2025-10-28T23:23:02.574281160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 28 23:23:02.575212 containerd[1565]: time="2025-10-28T23:23:02.575179586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 28 23:23:02.594058 containerd[1565]: time="2025-10-28T23:23:02.594005798Z" level=info msg="CreateContainer within sandbox \"916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 28 23:23:02.600175 containerd[1565]: time="2025-10-28T23:23:02.599362347Z" level=info msg="Container 786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1: 
CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:02.616517 containerd[1565]: time="2025-10-28T23:23:02.616471758Z" level=info msg="CreateContainer within sandbox \"916da6da42f5951ebfb25eaf288208d5ad65099041f694ad6a3f32745eaa6ded\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1\"" Oct 28 23:23:02.617058 containerd[1565]: time="2025-10-28T23:23:02.617027223Z" level=info msg="StartContainer for \"786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1\"" Oct 28 23:23:02.618858 containerd[1565]: time="2025-10-28T23:23:02.618815473Z" level=info msg="connecting to shim 786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1" address="unix:///run/containerd/s/f1e6dd8905fb62650a9b5c69c740b528b23e4ff07ddfc863ef436e07900657ee" protocol=ttrpc version=3 Oct 28 23:23:02.644319 systemd[1]: Started cri-containerd-786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1.scope - libcontainer container 786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1. Oct 28 23:23:02.678983 containerd[1565]: time="2025-10-28T23:23:02.678863049Z" level=info msg="StartContainer for \"786b3405f9ab1897e800fc4dcfe94f69430731e54ec2adcf253bfd55267a5ff1\" returns successfully" Oct 28 23:23:02.772694 kubelet[2718]: E1028 23:23:02.772334 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:02.889588 kubelet[2718]: E1028 23:23:02.888987 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:02.924153 kubelet[2718]: I1028 23:23:02.923475 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64db58bf9b-dx925" podStartSLOduration=1.185747986 podStartE2EDuration="4.923448548s" podCreationTimestamp="2025-10-28 23:22:58 +0000 UTC" firstStartedPulling="2025-10-28 23:22:58.837317165 +0000 UTC m=+27.176506698" lastFinishedPulling="2025-10-28 23:23:02.575017727 +0000 UTC m=+30.914207260" observedRunningTime="2025-10-28 23:23:02.923055622 +0000 UTC m=+31.262245155" watchObservedRunningTime="2025-10-28 23:23:02.923448548 +0000 UTC m=+31.262638081" Oct 28 23:23:02.959304 kubelet[2718]: E1028 23:23:02.959271 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:02.959304 kubelet[2718]: W1028 23:23:02.959298 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:02.959464 kubelet[2718]: E1028 23:23:02.959320 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:23:02.983094 kubelet[2718]: E1028 23:23:02.982846 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:02.983094 kubelet[2718]: W1028 23:23:02.982858 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:02.983094 kubelet[2718]: E1028 23:23:02.982868 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:02.983270 kubelet[2718]: E1028 23:23:02.983255 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:02.983329 kubelet[2718]: W1028 23:23:02.983318 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:02.983376 kubelet[2718]: E1028 23:23:02.983367 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.891091 kubelet[2718]: E1028 23:23:03.891057 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:03.931171 containerd[1565]: time="2025-10-28T23:23:03.931115814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:03.932076 containerd[1565]: time="2025-10-28T23:23:03.931853178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 28 23:23:03.932924 containerd[1565]: time="2025-10-28T23:23:03.932884254Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:03.934890 containerd[1565]: time="2025-10-28T23:23:03.934850397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:03.935524 containerd[1565]: time="2025-10-28T23:23:03.935496470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.36028508s" Oct 28 23:23:03.935636 containerd[1565]: time="2025-10-28T23:23:03.935617323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 28 23:23:03.938706 containerd[1565]: time="2025-10-28T23:23:03.938670629Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" 
for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 28 23:23:03.946320 containerd[1565]: time="2025-10-28T23:23:03.946271448Z" level=info msg="Container 3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:03.952104 containerd[1565]: time="2025-10-28T23:23:03.952057703Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\"" Oct 28 23:23:03.952599 containerd[1565]: time="2025-10-28T23:23:03.952581082Z" level=info msg="StartContainer for \"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\"" Oct 28 23:23:03.953870 containerd[1565]: time="2025-10-28T23:23:03.953841825Z" level=info msg="connecting to shim 3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87" address="unix:///run/containerd/s/448cea6ffc4d0ed1d2d2078a971d2378e4de44921cc6016b2078dbc7a3b98cfd" protocol=ttrpc version=3 Oct 28 23:23:03.968372 kubelet[2718]: E1028 23:23:03.968339 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.968372 kubelet[2718]: W1028 23:23:03.968360 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.968372 kubelet[2718]: E1028 23:23:03.968380 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.968984 kubelet[2718]: E1028 23:23:03.968612 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.968984 kubelet[2718]: W1028 23:23:03.968692 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.968984 kubelet[2718]: E1028 23:23:03.968706 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.968984 kubelet[2718]: E1028 23:23:03.968907 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.968984 kubelet[2718]: W1028 23:23:03.968917 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.968984 kubelet[2718]: E1028 23:23:03.968926 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:23:03.989581 kubelet[2718]: E1028 23:23:03.989422 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.989581 kubelet[2718]: W1028 23:23:03.989431 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.989581 kubelet[2718]: E1028 23:23:03.989440 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.989877 kubelet[2718]: E1028 23:23:03.989855 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.989950 kubelet[2718]: W1028 23:23:03.989935 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.990008 kubelet[2718]: E1028 23:23:03.989996 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.990257 kubelet[2718]: E1028 23:23:03.990241 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.990322 kubelet[2718]: W1028 23:23:03.990310 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.990391 kubelet[2718]: E1028 23:23:03.990378 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.990623 kubelet[2718]: E1028 23:23:03.990608 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.990689 kubelet[2718]: W1028 23:23:03.990677 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.990742 kubelet[2718]: E1028 23:23:03.990732 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.990757 systemd[1]: Started cri-containerd-3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87.scope - libcontainer container 3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87. 
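The repeated driver-call.go and plugins.go records above and below are the kubelet probing the FlexVolume plugin directory nodeagent~uds: the uds executable is missing, each driver call therefore returns empty output, and decoding an empty byte slice with encoding/json produces exactly the "unexpected end of JSON input" error being logged. A minimal sketch of that failure mode in Go; the driverStatus type approximates the FlexVolume init contract rather than the kubelet's internal struct:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus approximates the JSON a FlexVolume driver is expected to print
    // for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is
        // missing, so the driver call produces no output at all.
        var out []byte

        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Prints: decoding driver output: unexpected end of JSON input
            fmt.Println("decoding driver output:", err)
        }
    }

Until a driver binary that prints such JSON exists under nodeagent~uds (or the stale directory is removed), every probe cycle logs the same three records again, as seen throughout this capture.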
Oct 28 23:23:03.991565 kubelet[2718]: E1028 23:23:03.991449 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.991565 kubelet[2718]: W1028 23:23:03.991463 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.991565 kubelet[2718]: E1028 23:23:03.991475 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.992156 kubelet[2718]: E1028 23:23:03.991989 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.992304 kubelet[2718]: W1028 23:23:03.992229 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.992676 kubelet[2718]: E1028 23:23:03.992391 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.993037 kubelet[2718]: E1028 23:23:03.993018 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.993169 kubelet[2718]: W1028 23:23:03.993153 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.993247 kubelet[2718]: E1028 23:23:03.993234 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.993797 kubelet[2718]: E1028 23:23:03.993527 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.993797 kubelet[2718]: W1028 23:23:03.993554 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.993797 kubelet[2718]: E1028 23:23:03.993568 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:03.994010 kubelet[2718]: E1028 23:23:03.993994 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.994243 kubelet[2718]: W1028 23:23:03.994215 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.994311 kubelet[2718]: E1028 23:23:03.994299 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 28 23:23:03.996109 kubelet[2718]: E1028 23:23:03.996031 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 28 23:23:03.996109 kubelet[2718]: W1028 23:23:03.996054 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 28 23:23:03.996109 kubelet[2718]: E1028 23:23:03.996076 2718 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 28 23:23:04.030491 containerd[1565]: time="2025-10-28T23:23:04.030453969Z" level=info msg="StartContainer for \"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\" returns successfully" Oct 28 23:23:04.041030 systemd[1]: cri-containerd-3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87.scope: Deactivated successfully. Oct 28 23:23:04.043186 systemd[1]: cri-containerd-3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87.scope: Consumed 30ms CPU time, 6.3M memory peak, 4.1M written to disk. Oct 28 23:23:04.051783 containerd[1565]: time="2025-10-28T23:23:04.051727687Z" level=info msg="received exit event container_id:\"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\" id:\"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\" pid:3465 exited_at:{seconds:1761693784 nanos:46976329}" Oct 28 23:23:04.057943 containerd[1565]: time="2025-10-28T23:23:04.057893919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\" id:\"3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87\" pid:3465 exited_at:{seconds:1761693784 nanos:46976329}" Oct 28 23:23:04.092987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87-rootfs.mount: Deactivated successfully. 
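The "received exit event" and TaskExit records above are containerd reporting that the short-lived flexvol-driver container exited. A minimal sketch, assuming the containerd v1 Go client, the default management socket, and the k8s.io namespace used for CRI containers, of how a client can wait on the same exit status; the container ID is copied from the log, everything else is illustrative:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd management socket (the unix:///run/containerd/s/...
        // addresses in the log are per-container shim sockets, not this one).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        id := "3305ff41e7622d4dc6739ef0096957e09afc15ef6037c8c3643ceb3a01446e87"
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Wait returns a channel that delivers the same exit code and exited_at
        // timestamp carried by the TaskExit event in the log.
        statusC, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        status := <-statusC
        code, exitedAt, err := status.Result()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("task %s exited with code %d at %s\n", id, code, exitedAt)
    }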
Oct 28 23:23:04.772570 kubelet[2718]: E1028 23:23:04.772505 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:04.896160 kubelet[2718]: E1028 23:23:04.895708 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:04.896705 kubelet[2718]: E1028 23:23:04.896666 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:04.897826 containerd[1565]: time="2025-10-28T23:23:04.897798490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 28 23:23:06.772141 kubelet[2718]: E1028 23:23:06.772075 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:07.786586 containerd[1565]: time="2025-10-28T23:23:07.786529269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:07.800471 containerd[1565]: time="2025-10-28T23:23:07.800428473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 28 23:23:07.814781 containerd[1565]: time="2025-10-28T23:23:07.814709153Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:07.829646 containerd[1565]: time="2025-10-28T23:23:07.829576052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:07.830323 containerd[1565]: time="2025-10-28T23:23:07.830184391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.931572013s" Oct 28 23:23:07.830323 containerd[1565]: time="2025-10-28T23:23:07.830222155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 28 23:23:07.834406 containerd[1565]: time="2025-10-28T23:23:07.834306116Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 28 23:23:07.852805 containerd[1565]: time="2025-10-28T23:23:07.852724842Z" level=info msg="Container f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:07.860740 containerd[1565]: 
time="2025-10-28T23:23:07.860681983Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\"" Oct 28 23:23:07.864336 containerd[1565]: time="2025-10-28T23:23:07.861499663Z" level=info msg="StartContainer for \"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\"" Oct 28 23:23:07.864336 containerd[1565]: time="2025-10-28T23:23:07.862884479Z" level=info msg="connecting to shim f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e" address="unix:///run/containerd/s/448cea6ffc4d0ed1d2d2078a971d2378e4de44921cc6016b2078dbc7a3b98cfd" protocol=ttrpc version=3 Oct 28 23:23:07.884291 systemd[1]: Started cri-containerd-f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e.scope - libcontainer container f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e. Oct 28 23:23:07.923912 containerd[1565]: time="2025-10-28T23:23:07.923858580Z" level=info msg="StartContainer for \"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\" returns successfully" Oct 28 23:23:08.475802 systemd[1]: cri-containerd-f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e.scope: Deactivated successfully. Oct 28 23:23:08.476079 systemd[1]: cri-containerd-f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e.scope: Consumed 455ms CPU time, 179.9M memory peak, 2.9M read from disk, 165.9M written to disk. Oct 28 23:23:08.477154 containerd[1565]: time="2025-10-28T23:23:08.477088970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\" id:\"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\" pid:3531 exited_at:{seconds:1761693788 nanos:476807863}" Oct 28 23:23:08.477283 containerd[1565]: time="2025-10-28T23:23:08.477184499Z" level=info msg="received exit event container_id:\"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\" id:\"f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e\" pid:3531 exited_at:{seconds:1761693788 nanos:476807863}" Oct 28 23:23:08.490717 kubelet[2718]: I1028 23:23:08.490682 2718 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 28 23:23:08.509883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f17fa32bba408c98903f3aafe35a08a91b1705447dd176f3466e85fca0eb700e-rootfs.mount: Deactivated successfully. Oct 28 23:23:08.593211 systemd[1]: Created slice kubepods-besteffort-podc8b3ca1f_9314_4a0e_9dc4_82d040dc18d2.slice - libcontainer container kubepods-besteffort-podc8b3ca1f_9314_4a0e_9dc4_82d040dc18d2.slice. Oct 28 23:23:08.605045 systemd[1]: Created slice kubepods-burstable-pod0976481c_b05f_4bb8_b3b4_26cc6309b8dc.slice - libcontainer container kubepods-burstable-pod0976481c_b05f_4bb8_b3b4_26cc6309b8dc.slice. Oct 28 23:23:08.611639 systemd[1]: Created slice kubepods-besteffort-pod9bfa4765_ea23_42f8_b547_3a1e13f95156.slice - libcontainer container kubepods-besteffort-pod9bfa4765_ea23_42f8_b547_3a1e13f95156.slice. Oct 28 23:23:08.620371 systemd[1]: Created slice kubepods-besteffort-podc39ad0d2_f1ab_413c_a4c2_0f81171d2cd7.slice - libcontainer container kubepods-besteffort-podc39ad0d2_f1ab_413c_a4c2_0f81171d2cd7.slice. 
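The Created slice records above show the kubelet's systemd cgroup driver creating one slice per pod, grouped by QoS class (kubepods-besteffort-*, kubepods-burstable-*), with the pod UID appended after its dashes are escaped to underscores. A small sketch of that naming pattern, derived only from the names visible in this log rather than the kubelet's implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName mirrors the pattern visible in the Created slice records:
    // kubepods-<qos class>-pod<UID with "-" replaced by "_">.slice
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // UID of csi-node-driver-hnlxb as reported by the kubelet in this log.
        fmt.Println(podSliceName("besteffort", "74371a3d-9f28-4cd9-84c2-dcf19d44a64f"))
        // Output: kubepods-besteffort-pod74371a3d_9f28_4cd9_84c2_dcf19d44a64f.slice
    }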
Oct 28 23:23:08.621135 kubelet[2718]: I1028 23:23:08.621075 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-backend-key-pair\") pod \"whisker-5d5fbb878b-rjkkr\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " pod="calico-system/whisker-5d5fbb878b-rjkkr" Oct 28 23:23:08.621345 kubelet[2718]: I1028 23:23:08.621137 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4d309fd-dc09-4824-8579-3a6f396ee7ce-config\") pod \"goldmane-666569f655-s5mgj\" (UID: \"f4d309fd-dc09-4824-8579-3a6f396ee7ce\") " pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:08.621345 kubelet[2718]: I1028 23:23:08.621158 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxk98\" (UniqueName: \"kubernetes.io/projected/c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2-kube-api-access-pxk98\") pod \"calico-kube-controllers-5fcfcd784d-qg55p\" (UID: \"c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2\") " pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" Oct 28 23:23:08.621345 kubelet[2718]: I1028 23:23:08.621181 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2l4s\" (UniqueName: \"kubernetes.io/projected/9bfa4765-ea23-42f8-b547-3a1e13f95156-kube-api-access-q2l4s\") pod \"whisker-5d5fbb878b-rjkkr\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " pod="calico-system/whisker-5d5fbb878b-rjkkr" Oct 28 23:23:08.621345 kubelet[2718]: I1028 23:23:08.621198 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7-calico-apiserver-certs\") pod \"calico-apiserver-bd9c664b8-n79zh\" (UID: \"c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7\") " pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" Oct 28 23:23:08.621345 kubelet[2718]: I1028 23:23:08.621219 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2-tigera-ca-bundle\") pod \"calico-kube-controllers-5fcfcd784d-qg55p\" (UID: \"c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2\") " pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" Oct 28 23:23:08.621599 kubelet[2718]: I1028 23:23:08.621234 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcf9\" (UniqueName: \"kubernetes.io/projected/231f92ee-a43d-4793-84d1-fee8282bd19b-kube-api-access-xjcf9\") pod \"calico-apiserver-bd9c664b8-7445h\" (UID: \"231f92ee-a43d-4793-84d1-fee8282bd19b\") " pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" Oct 28 23:23:08.621599 kubelet[2718]: I1028 23:23:08.621252 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj6w\" (UniqueName: \"kubernetes.io/projected/c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7-kube-api-access-cbj6w\") pod \"calico-apiserver-bd9c664b8-n79zh\" (UID: \"c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7\") " pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" Oct 28 23:23:08.621599 kubelet[2718]: I1028 23:23:08.621268 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4d309fd-dc09-4824-8579-3a6f396ee7ce-goldmane-ca-bundle\") pod \"goldmane-666569f655-s5mgj\" (UID: \"f4d309fd-dc09-4824-8579-3a6f396ee7ce\") " pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:08.621599 kubelet[2718]: I1028 23:23:08.621293 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwkf\" (UniqueName: \"kubernetes.io/projected/0976481c-b05f-4bb8-b3b4-26cc6309b8dc-kube-api-access-pmwkf\") pod \"coredns-674b8bbfcf-nrbhn\" (UID: \"0976481c-b05f-4bb8-b3b4-26cc6309b8dc\") " pod="kube-system/coredns-674b8bbfcf-nrbhn" Oct 28 23:23:08.621599 kubelet[2718]: I1028 23:23:08.621311 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600f9b27-f2ba-4525-b4d0-74dcb8a935ee-config-volume\") pod \"coredns-674b8bbfcf-dvm7p\" (UID: \"600f9b27-f2ba-4525-b4d0-74dcb8a935ee\") " pod="kube-system/coredns-674b8bbfcf-dvm7p" Oct 28 23:23:08.621744 kubelet[2718]: I1028 23:23:08.621330 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-ca-bundle\") pod \"whisker-5d5fbb878b-rjkkr\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " pod="calico-system/whisker-5d5fbb878b-rjkkr" Oct 28 23:23:08.621744 kubelet[2718]: I1028 23:23:08.621346 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/231f92ee-a43d-4793-84d1-fee8282bd19b-calico-apiserver-certs\") pod \"calico-apiserver-bd9c664b8-7445h\" (UID: \"231f92ee-a43d-4793-84d1-fee8282bd19b\") " pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" Oct 28 23:23:08.621744 kubelet[2718]: I1028 23:23:08.621364 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f4d309fd-dc09-4824-8579-3a6f396ee7ce-goldmane-key-pair\") pod \"goldmane-666569f655-s5mgj\" (UID: \"f4d309fd-dc09-4824-8579-3a6f396ee7ce\") " pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:08.621744 kubelet[2718]: I1028 23:23:08.621381 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq46n\" (UniqueName: \"kubernetes.io/projected/f4d309fd-dc09-4824-8579-3a6f396ee7ce-kube-api-access-qq46n\") pod \"goldmane-666569f655-s5mgj\" (UID: \"f4d309fd-dc09-4824-8579-3a6f396ee7ce\") " pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:08.621744 kubelet[2718]: I1028 23:23:08.621395 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0976481c-b05f-4bb8-b3b4-26cc6309b8dc-config-volume\") pod \"coredns-674b8bbfcf-nrbhn\" (UID: \"0976481c-b05f-4bb8-b3b4-26cc6309b8dc\") " pod="kube-system/coredns-674b8bbfcf-nrbhn" Oct 28 23:23:08.621970 kubelet[2718]: I1028 23:23:08.621414 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4n8\" (UniqueName: \"kubernetes.io/projected/600f9b27-f2ba-4525-b4d0-74dcb8a935ee-kube-api-access-pl4n8\") pod \"coredns-674b8bbfcf-dvm7p\" (UID: \"600f9b27-f2ba-4525-b4d0-74dcb8a935ee\") " pod="kube-system/coredns-674b8bbfcf-dvm7p" Oct 28 
23:23:08.626565 systemd[1]: Created slice kubepods-besteffort-pod231f92ee_a43d_4793_84d1_fee8282bd19b.slice - libcontainer container kubepods-besteffort-pod231f92ee_a43d_4793_84d1_fee8282bd19b.slice. Oct 28 23:23:08.632754 systemd[1]: Created slice kubepods-besteffort-podf4d309fd_dc09_4824_8579_3a6f396ee7ce.slice - libcontainer container kubepods-besteffort-podf4d309fd_dc09_4824_8579_3a6f396ee7ce.slice. Oct 28 23:23:08.638608 systemd[1]: Created slice kubepods-burstable-pod600f9b27_f2ba_4525_b4d0_74dcb8a935ee.slice - libcontainer container kubepods-burstable-pod600f9b27_f2ba_4525_b4d0_74dcb8a935ee.slice. Oct 28 23:23:08.781416 systemd[1]: Created slice kubepods-besteffort-pod74371a3d_9f28_4cd9_84c2_dcf19d44a64f.slice - libcontainer container kubepods-besteffort-pod74371a3d_9f28_4cd9_84c2_dcf19d44a64f.slice. Oct 28 23:23:08.784317 containerd[1565]: time="2025-10-28T23:23:08.784272283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnlxb,Uid:74371a3d-9f28-4cd9-84c2-dcf19d44a64f,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:08.876869 containerd[1565]: time="2025-10-28T23:23:08.876818466Z" level=error msg="Failed to destroy network for sandbox \"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:08.878492 systemd[1]: run-netns-cni\x2df2ee3308\x2d43a2\x2dde39\x2ded86\x2da10a6e26945d.mount: Deactivated successfully. Oct 28 23:23:08.885998 containerd[1565]: time="2025-10-28T23:23:08.885891567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnlxb,Uid:74371a3d-9f28-4cd9-84c2-dcf19d44a64f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:08.895144 kubelet[2718]: E1028 23:23:08.893202 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:08.895144 kubelet[2718]: E1028 23:23:08.893608 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:23:08.895144 kubelet[2718]: E1028 23:23:08.893632 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-hnlxb" Oct 28 23:23:08.895403 kubelet[2718]: E1028 23:23:08.893693 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb0c63f0481127ca88633c7d619a8fb67fa457327ab97775786c300311e5bd1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:08.899528 containerd[1565]: time="2025-10-28T23:23:08.899485697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcfcd784d-qg55p,Uid:c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:08.908411 kubelet[2718]: E1028 23:23:08.908368 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:08.908907 kubelet[2718]: E1028 23:23:08.908870 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:08.910577 containerd[1565]: time="2025-10-28T23:23:08.910142468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 28 23:23:08.910577 containerd[1565]: time="2025-10-28T23:23:08.910211915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrbhn,Uid:0976481c-b05f-4bb8-b3b4-26cc6309b8dc,Namespace:kube-system,Attempt:0,}" Oct 28 23:23:08.916230 containerd[1565]: time="2025-10-28T23:23:08.916178521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d5fbb878b-rjkkr,Uid:9bfa4765-ea23-42f8-b547-3a1e13f95156,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:08.927002 containerd[1565]: time="2025-10-28T23:23:08.926969265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-n79zh,Uid:c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:23:08.929588 containerd[1565]: time="2025-10-28T23:23:08.929553510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-7445h,Uid:231f92ee-a43d-4793-84d1-fee8282bd19b,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:23:08.938032 containerd[1565]: time="2025-10-28T23:23:08.937981430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s5mgj,Uid:f4d309fd-dc09-4824-8579-3a6f396ee7ce,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:08.941232 kubelet[2718]: E1028 23:23:08.941201 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:08.942809 containerd[1565]: time="2025-10-28T23:23:08.942689597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvm7p,Uid:600f9b27-f2ba-4525-b4d0-74dcb8a935ee,Namespace:kube-system,Attempt:0,}" Oct 28 23:23:09.001758 containerd[1565]: time="2025-10-28T23:23:09.001717037Z" level=error msg="Failed to destroy network for sandbox 
\"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.003099 containerd[1565]: time="2025-10-28T23:23:09.002986194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcfcd784d-qg55p,Uid:c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.003406 kubelet[2718]: E1028 23:23:09.003363 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.003465 kubelet[2718]: E1028 23:23:09.003425 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" Oct 28 23:23:09.003465 kubelet[2718]: E1028 23:23:09.003448 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" Oct 28 23:23:09.003556 kubelet[2718]: E1028 23:23:09.003502 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fcfcd784d-qg55p_calico-system(c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fcfcd784d-qg55p_calico-system(c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af7f68aadb8d8ccdc951e45a0e03422a80393722493b87e8f9eb3398bbdf0a98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" podUID="c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2" Oct 28 23:23:09.020766 containerd[1565]: time="2025-10-28T23:23:09.020715704Z" level=error msg="Failed to destroy network for sandbox \"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 28 23:23:09.022068 containerd[1565]: time="2025-10-28T23:23:09.022024024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrbhn,Uid:0976481c-b05f-4bb8-b3b4-26cc6309b8dc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.022717 kubelet[2718]: E1028 23:23:09.022289 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.022717 kubelet[2718]: E1028 23:23:09.022358 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrbhn" Oct 28 23:23:09.022717 kubelet[2718]: E1028 23:23:09.022380 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrbhn" Oct 28 23:23:09.022882 kubelet[2718]: E1028 23:23:09.022431 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nrbhn_kube-system(0976481c-b05f-4bb8-b3b4-26cc6309b8dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nrbhn_kube-system(0976481c-b05f-4bb8-b3b4-26cc6309b8dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76551908e83db64f257d5bf4c9ee44d9d333f1755bc8f546efd846d21fdad401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nrbhn" podUID="0976481c-b05f-4bb8-b3b4-26cc6309b8dc" Oct 28 23:23:09.031604 containerd[1565]: time="2025-10-28T23:23:09.031494414Z" level=error msg="Failed to destroy network for sandbox \"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.035109 containerd[1565]: time="2025-10-28T23:23:09.034961693Z" level=error msg="Failed to destroy network for sandbox \"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 28 23:23:09.035289 containerd[1565]: time="2025-10-28T23:23:09.035164312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s5mgj,Uid:f4d309fd-dc09-4824-8579-3a6f396ee7ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.035566 kubelet[2718]: E1028 23:23:09.035519 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.035625 kubelet[2718]: E1028 23:23:09.035581 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:09.035625 kubelet[2718]: E1028 23:23:09.035602 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s5mgj" Oct 28 23:23:09.035678 kubelet[2718]: E1028 23:23:09.035646 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s5mgj_calico-system(f4d309fd-dc09-4824-8579-3a6f396ee7ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s5mgj_calico-system(f4d309fd-dc09-4824-8579-3a6f396ee7ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdf46f178ee564333abd6ce15e9c954cf947546efa0e6678ceba6cdc2ad83e22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s5mgj" podUID="f4d309fd-dc09-4824-8579-3a6f396ee7ce" Oct 28 23:23:09.037098 containerd[1565]: time="2025-10-28T23:23:09.036890750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-7445h,Uid:231f92ee-a43d-4793-84d1-fee8282bd19b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.039182 kubelet[2718]: E1028 23:23:09.039143 2718 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.039268 kubelet[2718]: E1028 23:23:09.039248 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" Oct 28 23:23:09.039314 kubelet[2718]: E1028 23:23:09.039272 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" Oct 28 23:23:09.039359 kubelet[2718]: E1028 23:23:09.039324 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd9c664b8-7445h_calico-apiserver(231f92ee-a43d-4793-84d1-fee8282bd19b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd9c664b8-7445h_calico-apiserver(231f92ee-a43d-4793-84d1-fee8282bd19b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5daf3c2285f7729f7edcf63e8357b68aa97de57586730b3b117d0f47cf68430\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b" Oct 28 23:23:09.041979 containerd[1565]: time="2025-10-28T23:23:09.041944375Z" level=error msg="Failed to destroy network for sandbox \"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.042961 containerd[1565]: time="2025-10-28T23:23:09.042929025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-n79zh,Uid:c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.043170 kubelet[2718]: E1028 23:23:09.043113 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Oct 28 23:23:09.043223 kubelet[2718]: E1028 23:23:09.043190 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" Oct 28 23:23:09.043223 kubelet[2718]: E1028 23:23:09.043211 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" Oct 28 23:23:09.043277 kubelet[2718]: E1028 23:23:09.043255 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bd9c664b8-n79zh_calico-apiserver(c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bd9c664b8-n79zh_calico-apiserver(c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae37eb6cc7f0de6a41d8ed23df9f1b9058d9826fad4f9b89461603f25aebc235\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:09.054116 containerd[1565]: time="2025-10-28T23:23:09.054063569Z" level=error msg="Failed to destroy network for sandbox \"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.054707 containerd[1565]: time="2025-10-28T23:23:09.054670305Z" level=error msg="Failed to destroy network for sandbox \"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.055191 containerd[1565]: time="2025-10-28T23:23:09.055158749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvm7p,Uid:600f9b27-f2ba-4525-b4d0-74dcb8a935ee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.055809 kubelet[2718]: E1028 23:23:09.055380 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.055809 kubelet[2718]: E1028 23:23:09.055429 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvm7p" Oct 28 23:23:09.055809 kubelet[2718]: E1028 23:23:09.055449 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvm7p" Oct 28 23:23:09.055981 kubelet[2718]: E1028 23:23:09.055490 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvm7p_kube-system(600f9b27-f2ba-4525-b4d0-74dcb8a935ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvm7p_kube-system(600f9b27-f2ba-4525-b4d0-74dcb8a935ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"792ea0473205c2e2a97e02bf8ccf08d0deec8d46b5f8d890fbdc79d3327982e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvm7p" podUID="600f9b27-f2ba-4525-b4d0-74dcb8a935ee" Oct 28 23:23:09.056045 containerd[1565]: time="2025-10-28T23:23:09.055880016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d5fbb878b-rjkkr,Uid:9bfa4765-ea23-42f8-b547-3a1e13f95156,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.056275 kubelet[2718]: E1028 23:23:09.056239 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 28 23:23:09.056325 kubelet[2718]: E1028 23:23:09.056282 2718 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d5fbb878b-rjkkr" Oct 28 23:23:09.056325 kubelet[2718]: E1028 23:23:09.056300 2718 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d5fbb878b-rjkkr" Oct 28 23:23:09.056411 kubelet[2718]: E1028 23:23:09.056345 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d5fbb878b-rjkkr_calico-system(9bfa4765-ea23-42f8-b547-3a1e13f95156)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d5fbb878b-rjkkr_calico-system(9bfa4765-ea23-42f8-b547-3a1e13f95156)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04cf33eb612525be98eb750970f1cc24db1e4e14d2649de3db1a66664e71f456\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d5fbb878b-rjkkr" podUID="9bfa4765-ea23-42f8-b547-3a1e13f95156" Oct 28 23:23:09.855020 systemd[1]: run-netns-cni\x2d39842864\x2d5225\x2dc2f2\x2d8a04\x2d2cd7365da171.mount: Deactivated successfully. Oct 28 23:23:09.855159 systemd[1]: run-netns-cni\x2d745272e1\x2d2c19\x2d2fed\x2dc97d\x2d3a0ca3fcd333.mount: Deactivated successfully. Oct 28 23:23:09.855211 systemd[1]: run-netns-cni\x2d179eba5d\x2d3ef9\x2d6143\x2d3198\x2dfd38f0ae73ed.mount: Deactivated successfully. Oct 28 23:23:12.914560 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:48320.service - OpenSSH per-connection server daemon (10.0.0.1:48320). Oct 28 23:23:12.985995 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 48320 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:12.988227 sshd-session[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:12.992505 systemd-logind[1542]: New session 9 of user core. Oct 28 23:23:12.998288 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 28 23:23:13.061148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773720228.mount: Deactivated successfully. Oct 28 23:23:13.254275 sshd[3843]: Connection closed by 10.0.0.1 port 48320 Oct 28 23:23:13.253976 sshd-session[3839]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:13.258355 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:48320.service: Deactivated successfully. Oct 28 23:23:13.260747 systemd[1]: session-9.scope: Deactivated successfully. Oct 28 23:23:13.261751 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Oct 28 23:23:13.263232 systemd-logind[1542]: Removed session 9. 
Oct 28 23:23:13.306907 containerd[1565]: time="2025-10-28T23:23:13.306837884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:13.312591 containerd[1565]: time="2025-10-28T23:23:13.312540430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 28 23:23:13.321916 containerd[1565]: time="2025-10-28T23:23:13.321860352Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:13.325731 containerd[1565]: time="2025-10-28T23:23:13.325686584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 28 23:23:13.327011 containerd[1565]: time="2025-10-28T23:23:13.326153183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.415825897s" Oct 28 23:23:13.327011 containerd[1565]: time="2025-10-28T23:23:13.326176905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 28 23:23:13.384809 containerd[1565]: time="2025-10-28T23:23:13.384741411Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 28 23:23:13.393839 containerd[1565]: time="2025-10-28T23:23:13.393782990Z" level=info msg="Container 3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:13.396478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994654192.mount: Deactivated successfully. Oct 28 23:23:13.405066 containerd[1565]: time="2025-10-28T23:23:13.405006187Z" level=info msg="CreateContainer within sandbox \"293f5c5a84cb40484c6a5418e808823b231e676fc6edbe1709147395ea150312\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\"" Oct 28 23:23:13.405488 containerd[1565]: time="2025-10-28T23:23:13.405464624Z" level=info msg="StartContainer for \"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\"" Oct 28 23:23:13.407162 containerd[1565]: time="2025-10-28T23:23:13.407112519Z" level=info msg="connecting to shim 3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035" address="unix:///run/containerd/s/448cea6ffc4d0ed1d2d2078a971d2378e4de44921cc6016b2078dbc7a3b98cfd" protocol=ttrpc version=3 Oct 28 23:23:13.434285 systemd[1]: Started cri-containerd-3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035.scope - libcontainer container 3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035. Oct 28 23:23:13.466849 containerd[1565]: time="2025-10-28T23:23:13.466810638Z" level=info msg="StartContainer for \"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\" returns successfully" Oct 28 23:23:13.587290 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Oct 28 23:23:13.587391 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 28 23:23:13.758648 kubelet[2718]: I1028 23:23:13.758598 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-ca-bundle\") pod \"9bfa4765-ea23-42f8-b547-3a1e13f95156\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " Oct 28 23:23:13.758648 kubelet[2718]: I1028 23:23:13.758654 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2l4s\" (UniqueName: \"kubernetes.io/projected/9bfa4765-ea23-42f8-b547-3a1e13f95156-kube-api-access-q2l4s\") pod \"9bfa4765-ea23-42f8-b547-3a1e13f95156\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " Oct 28 23:23:13.759032 kubelet[2718]: I1028 23:23:13.758680 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-backend-key-pair\") pod \"9bfa4765-ea23-42f8-b547-3a1e13f95156\" (UID: \"9bfa4765-ea23-42f8-b547-3a1e13f95156\") " Oct 28 23:23:13.780260 kubelet[2718]: I1028 23:23:13.780176 2718 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfa4765-ea23-42f8-b547-3a1e13f95156-kube-api-access-q2l4s" (OuterVolumeSpecName: "kube-api-access-q2l4s") pod "9bfa4765-ea23-42f8-b547-3a1e13f95156" (UID: "9bfa4765-ea23-42f8-b547-3a1e13f95156"). InnerVolumeSpecName "kube-api-access-q2l4s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 28 23:23:13.780260 kubelet[2718]: I1028 23:23:13.780175 2718 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9bfa4765-ea23-42f8-b547-3a1e13f95156" (UID: "9bfa4765-ea23-42f8-b547-3a1e13f95156"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 28 23:23:13.785481 kubelet[2718]: I1028 23:23:13.785445 2718 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9bfa4765-ea23-42f8-b547-3a1e13f95156" (UID: "9bfa4765-ea23-42f8-b547-3a1e13f95156"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 28 23:23:13.859621 kubelet[2718]: I1028 23:23:13.859377 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 28 23:23:13.859621 kubelet[2718]: I1028 23:23:13.859413 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2l4s\" (UniqueName: \"kubernetes.io/projected/9bfa4765-ea23-42f8-b547-3a1e13f95156-kube-api-access-q2l4s\") on node \"localhost\" DevicePath \"\"" Oct 28 23:23:13.859621 kubelet[2718]: I1028 23:23:13.859423 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bfa4765-ea23-42f8-b547-3a1e13f95156-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 28 23:23:13.923139 kubelet[2718]: E1028 23:23:13.923063 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:13.933380 systemd[1]: Removed slice kubepods-besteffort-pod9bfa4765_ea23_42f8_b547_3a1e13f95156.slice - libcontainer container kubepods-besteffort-pod9bfa4765_ea23_42f8_b547_3a1e13f95156.slice. Oct 28 23:23:13.942408 kubelet[2718]: I1028 23:23:13.941456 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-67t2v" podStartSLOduration=1.524292518 podStartE2EDuration="15.941439987s" podCreationTimestamp="2025-10-28 23:22:58 +0000 UTC" firstStartedPulling="2025-10-28 23:22:58.910398307 +0000 UTC m=+27.249587840" lastFinishedPulling="2025-10-28 23:23:13.327545776 +0000 UTC m=+41.666735309" observedRunningTime="2025-10-28 23:23:13.939735007 +0000 UTC m=+42.278924580" watchObservedRunningTime="2025-10-28 23:23:13.941439987 +0000 UTC m=+42.280629520" Oct 28 23:23:13.992886 systemd[1]: Created slice kubepods-besteffort-pod6090c7c2_848a_41cc_9996_0be408373b53.slice - libcontainer container kubepods-besteffort-pod6090c7c2_848a_41cc_9996_0be408373b53.slice. 
Oct 28 23:23:14.061423 kubelet[2718]: I1028 23:23:14.061375 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85hk\" (UniqueName: \"kubernetes.io/projected/6090c7c2-848a-41cc-9996-0be408373b53-kube-api-access-v85hk\") pod \"whisker-555d9c847b-pmc7r\" (UID: \"6090c7c2-848a-41cc-9996-0be408373b53\") " pod="calico-system/whisker-555d9c847b-pmc7r" Oct 28 23:23:14.061423 kubelet[2718]: I1028 23:23:14.061429 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6090c7c2-848a-41cc-9996-0be408373b53-whisker-backend-key-pair\") pod \"whisker-555d9c847b-pmc7r\" (UID: \"6090c7c2-848a-41cc-9996-0be408373b53\") " pod="calico-system/whisker-555d9c847b-pmc7r" Oct 28 23:23:14.061559 kubelet[2718]: I1028 23:23:14.061451 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6090c7c2-848a-41cc-9996-0be408373b53-whisker-ca-bundle\") pod \"whisker-555d9c847b-pmc7r\" (UID: \"6090c7c2-848a-41cc-9996-0be408373b53\") " pod="calico-system/whisker-555d9c847b-pmc7r" Oct 28 23:23:14.063097 systemd[1]: var-lib-kubelet-pods-9bfa4765\x2dea23\x2d42f8\x2db547\x2d3a1e13f95156-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2l4s.mount: Deactivated successfully. Oct 28 23:23:14.063450 systemd[1]: var-lib-kubelet-pods-9bfa4765\x2dea23\x2d42f8\x2db547\x2d3a1e13f95156-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 28 23:23:14.088909 containerd[1565]: time="2025-10-28T23:23:14.088846124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\" id:\"5066641dd78f162dc8c947feb1786d7f060dba17b2f426d3135a527d7ea365a6\" pid:3933 exit_status:1 exited_at:{seconds:1761693794 nanos:88556381}" Oct 28 23:23:14.298484 containerd[1565]: time="2025-10-28T23:23:14.298436319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555d9c847b-pmc7r,Uid:6090c7c2-848a-41cc-9996-0be408373b53,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:14.453178 systemd-networkd[1480]: calidf7308c3e5f: Link UP Oct 28 23:23:14.453362 systemd-networkd[1480]: calidf7308c3e5f: Gained carrier Oct 28 23:23:14.466210 containerd[1565]: 2025-10-28 23:23:14.321 [INFO][3947] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 28 23:23:14.466210 containerd[1565]: 2025-10-28 23:23:14.351 [INFO][3947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--555d9c847b--pmc7r-eth0 whisker-555d9c847b- calico-system 6090c7c2-848a-41cc-9996-0be408373b53 990 0 2025-10-28 23:23:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:555d9c847b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-555d9c847b-pmc7r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidf7308c3e5f [] [] }} ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-" Oct 28 23:23:14.466210 containerd[1565]: 2025-10-28 23:23:14.351 [INFO][3947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.466210 containerd[1565]: 2025-10-28 23:23:14.412 [INFO][3962] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" HandleID="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Workload="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.412 [INFO][3962] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" HandleID="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Workload="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-555d9c847b-pmc7r", "timestamp":"2025-10-28 23:23:14.412019436 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.412 [INFO][3962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.412 [INFO][3962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.412 [INFO][3962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.422 [INFO][3962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" host="localhost" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.427 [INFO][3962] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.431 [INFO][3962] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.432 [INFO][3962] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.434 [INFO][3962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:14.466581 containerd[1565]: 2025-10-28 23:23:14.434 [INFO][3962] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" host="localhost" Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.436 [INFO][3962] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129 Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.440 [INFO][3962] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" host="localhost" Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.444 [INFO][3962] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" host="localhost" Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.444 [INFO][3962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" host="localhost" Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.444 [INFO][3962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:23:14.466794 containerd[1565]: 2025-10-28 23:23:14.444 [INFO][3962] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" HandleID="k8s-pod-network.d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Workload="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.466902 containerd[1565]: 2025-10-28 23:23:14.446 [INFO][3947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--555d9c847b--pmc7r-eth0", GenerateName:"whisker-555d9c847b-", Namespace:"calico-system", SelfLink:"", UID:"6090c7c2-848a-41cc-9996-0be408373b53", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555d9c847b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-555d9c847b-pmc7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf7308c3e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:14.466902 containerd[1565]: 2025-10-28 23:23:14.447 [INFO][3947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.466967 containerd[1565]: 2025-10-28 23:23:14.447 [INFO][3947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf7308c3e5f ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.466967 containerd[1565]: 2025-10-28 23:23:14.453 [INFO][3947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.467002 containerd[1565]: 2025-10-28 23:23:14.454 [INFO][3947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--555d9c847b--pmc7r-eth0", GenerateName:"whisker-555d9c847b-", Namespace:"calico-system", SelfLink:"", UID:"6090c7c2-848a-41cc-9996-0be408373b53", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"555d9c847b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129", Pod:"whisker-555d9c847b-pmc7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidf7308c3e5f", MAC:"42:f7:bf:97:1a:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:14.467044 containerd[1565]: 2025-10-28 23:23:14.463 [INFO][3947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" Namespace="calico-system" Pod="whisker-555d9c847b-pmc7r" WorkloadEndpoint="localhost-k8s-whisker--555d9c847b--pmc7r-eth0" Oct 28 23:23:14.510876 containerd[1565]: time="2025-10-28T23:23:14.510666644Z" level=info msg="connecting to shim d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129" address="unix:///run/containerd/s/79933577de82fa4663816852169e9816fb6ff92ec6608b02ca0ba04b7aaabcd7" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:14.532263 systemd[1]: Started cri-containerd-d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129.scope - libcontainer container d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129. 
Oct 28 23:23:14.542325 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:14.560622 containerd[1565]: time="2025-10-28T23:23:14.560435124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-555d9c847b-pmc7r,Uid:6090c7c2-848a-41cc-9996-0be408373b53,Namespace:calico-system,Attempt:0,} returns sandbox id \"d15cbe6fbfb6aec27c92c05b020cf17be7ba794b90a42d85051a4c7090bb8129\"" Oct 28 23:23:14.563408 containerd[1565]: time="2025-10-28T23:23:14.563189863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:23:14.785783 containerd[1565]: time="2025-10-28T23:23:14.785728809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:14.786599 containerd[1565]: time="2025-10-28T23:23:14.786560955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 23:23:14.786649 containerd[1565]: time="2025-10-28T23:23:14.786641321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 23:23:14.788854 kubelet[2718]: E1028 23:23:14.788808 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:23:14.790160 kubelet[2718]: E1028 23:23:14.789661 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:23:14.798955 kubelet[2718]: E1028 23:23:14.798879 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a00c4191deae4866a22f5a545e62360f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v85hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555d9c847b-pmc7r_calico-system(6090c7c2-848a-41cc-9996-0be408373b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:14.800987 containerd[1565]: time="2025-10-28T23:23:14.800959621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 23:23:14.928930 kubelet[2718]: E1028 23:23:14.928592 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:14.997747 containerd[1565]: time="2025-10-28T23:23:14.997694553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:15.007076 containerd[1565]: time="2025-10-28T23:23:15.006964879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 23:23:15.007226 containerd[1565]: time="2025-10-28T23:23:15.007051446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 23:23:15.007347 kubelet[2718]: E1028 23:23:15.007305 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:23:15.007402 kubelet[2718]: E1028 23:23:15.007355 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:23:15.007540 kubelet[2718]: E1028 23:23:15.007465 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v85hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555d9c847b-pmc7r_calico-system(6090c7c2-848a-41cc-9996-0be408373b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:15.008742 kubelet[2718]: E1028 23:23:15.008691 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555d9c847b-pmc7r" podUID="6090c7c2-848a-41cc-9996-0be408373b53" Oct 28 23:23:15.048413 containerd[1565]: 
time="2025-10-28T23:23:15.048368569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\" id:\"ab72ca0cd8d5e6bc391e6a421b83bd7994eac57972bc35e50ce93c099e33d1ea\" pid:4136 exit_status:1 exited_at:{seconds:1761693795 nanos:48029783}" Oct 28 23:23:15.225355 systemd-networkd[1480]: vxlan.calico: Link UP Oct 28 23:23:15.225541 systemd-networkd[1480]: vxlan.calico: Gained carrier Oct 28 23:23:15.656301 systemd-networkd[1480]: calidf7308c3e5f: Gained IPv6LL Oct 28 23:23:15.774451 kubelet[2718]: I1028 23:23:15.774407 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfa4765-ea23-42f8-b547-3a1e13f95156" path="/var/lib/kubelet/pods/9bfa4765-ea23-42f8-b547-3a1e13f95156/volumes" Oct 28 23:23:15.929836 kubelet[2718]: E1028 23:23:15.929703 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555d9c847b-pmc7r" podUID="6090c7c2-848a-41cc-9996-0be408373b53" Oct 28 23:23:16.875247 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Oct 28 23:23:18.268679 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:48324.service - OpenSSH per-connection server daemon (10.0.0.1:48324). Oct 28 23:23:18.340810 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 48324 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:18.342484 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:18.346426 systemd-logind[1542]: New session 10 of user core. Oct 28 23:23:18.360349 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 28 23:23:18.478998 sshd[4260]: Connection closed by 10.0.0.1 port 48324 Oct 28 23:23:18.479405 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:18.483962 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:48324.service: Deactivated successfully. Oct 28 23:23:18.485652 systemd[1]: session-10.scope: Deactivated successfully. Oct 28 23:23:18.487143 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Oct 28 23:23:18.487928 systemd-logind[1542]: Removed session 10. 
Oct 28 23:23:19.772617 containerd[1565]: time="2025-10-28T23:23:19.772573539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-n79zh,Uid:c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:23:19.773357 containerd[1565]: time="2025-10-28T23:23:19.773232625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-7445h,Uid:231f92ee-a43d-4793-84d1-fee8282bd19b,Namespace:calico-apiserver,Attempt:0,}" Oct 28 23:23:19.923041 systemd-networkd[1480]: calie21494a1b9d: Link UP Oct 28 23:23:19.923470 systemd-networkd[1480]: calie21494a1b9d: Gained carrier Oct 28 23:23:19.939613 containerd[1565]: 2025-10-28 23:23:19.850 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0 calico-apiserver-bd9c664b8- calico-apiserver c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7 898 0 2025-10-28 23:22:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd9c664b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd9c664b8-n79zh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie21494a1b9d [] [] }} ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-" Oct 28 23:23:19.939613 containerd[1565]: 2025-10-28 23:23:19.851 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.939613 containerd[1565]: 2025-10-28 23:23:19.880 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" HandleID="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Workload="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.881 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" HandleID="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Workload="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001176e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd9c664b8-n79zh", "timestamp":"2025-10-28 23:23:19.880949952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.881 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.881 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.881 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.892 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" host="localhost" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.896 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.900 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.902 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.904 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:19.939814 containerd[1565]: 2025-10-28 23:23:19.904 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" host="localhost" Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.906 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.910 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" host="localhost" Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" host="localhost" Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" host="localhost" Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
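Editor's note: the IPAM trace above confirms the block affinity 192.168.88.128/26 for this host and then claims 192.168.88.130/26 for the apiserver pod, immediately after 192.168.88.129 went to the whisker pod. As a rough picture of what "attempting to assign 1 address from block" means, here is a standard-library sketch (not libcalico-go) that walks the /26 and hands out the first address not already used, skipping the network address itself:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the block and returns the first address not in used.
// The real allocator also tracks handles and reservations; this only
// mirrors the sequential behaviour visible in the log.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() { // skip .128 itself
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.129"): true, // whisker-555d9c847b-pmc7r
	}
	a, ok := nextFree(block, used)
	fmt.Println(a, ok) // 192.168.88.130 true, matching calico-apiserver-bd9c664b8-n79zh
}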
Oct 28 23:23:19.940017 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" HandleID="k8s-pod-network.afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Workload="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.940170 containerd[1565]: 2025-10-28 23:23:19.917 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0", GenerateName:"calico-apiserver-bd9c664b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd9c664b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd9c664b8-n79zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21494a1b9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:19.940224 containerd[1565]: 2025-10-28 23:23:19.917 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.940224 containerd[1565]: 2025-10-28 23:23:19.917 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie21494a1b9d ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.940224 containerd[1565]: 2025-10-28 23:23:19.924 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.940282 containerd[1565]: 2025-10-28 23:23:19.925 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0", GenerateName:"calico-apiserver-bd9c664b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd9c664b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c", Pod:"calico-apiserver-bd9c664b8-n79zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie21494a1b9d", MAC:"5e:b0:42:be:ea:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:19.940336 containerd[1565]: 2025-10-28 23:23:19.936 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-n79zh" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--n79zh-eth0" Oct 28 23:23:19.969508 containerd[1565]: time="2025-10-28T23:23:19.969404718Z" level=info msg="connecting to shim afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c" address="unix:///run/containerd/s/31de7d2b8cda2ba342f34d49e762d991a02e0d9da24d99fa87bf7b81b45dbda3" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:20.019340 systemd[1]: Started cri-containerd-afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c.scope - libcontainer container afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c. 
Oct 28 23:23:20.038326 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:20.043515 systemd-networkd[1480]: cali19cb98f5c78: Link UP Oct 28 23:23:20.045809 systemd-networkd[1480]: cali19cb98f5c78: Gained carrier Oct 28 23:23:20.064733 containerd[1565]: 2025-10-28 23:23:19.853 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0 calico-apiserver-bd9c664b8- calico-apiserver 231f92ee-a43d-4793-84d1-fee8282bd19b 894 0 2025-10-28 23:22:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bd9c664b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bd9c664b8-7445h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19cb98f5c78 [] [] }} ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-" Oct 28 23:23:20.064733 containerd[1565]: 2025-10-28 23:23:19.853 [INFO][4290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.064733 containerd[1565]: 2025-10-28 23:23:19.885 [INFO][4305] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" HandleID="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Workload="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:19.885 [INFO][4305] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" HandleID="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Workload="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddc00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bd9c664b8-7445h", "timestamp":"2025-10-28 23:23:19.885484872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:19.885 [INFO][4305] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4305] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:19.915 [INFO][4305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:19.992 [INFO][4305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" host="localhost" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:20.004 [INFO][4305] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:20.010 [INFO][4305] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:20.014 [INFO][4305] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:20.017 [INFO][4305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:20.064929 containerd[1565]: 2025-10-28 23:23:20.017 [INFO][4305] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" host="localhost" Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.019 [INFO][4305] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2 Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.028 [INFO][4305] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" host="localhost" Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.035 [INFO][4305] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" host="localhost" Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.035 [INFO][4305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" host="localhost" Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.035 [INFO][4305] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
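Editor's note: compare the timestamps in the two IPAM traces — the request for -7445h logged "About to acquire host-wide IPAM lock" at 19.885 but acquired it only at 19.915, the instant the first request released it, so per-node address assignment is serialized. A purely conceptual sketch of that serialization, with a plain mutex standing in for the host-wide lock:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential suffixes under one lock, so two pods being
// set up concurrently can never claim the same address. Conceptual stand-in
// for the locking the IPAM plugin logs above, not the real implementation.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) assign(pod string) string {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d", a.next)
	a.next++
	return ip + " -> " + pod
}

func main() {
	a := &allocator{next: 130}
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-bd9c664b8-n79zh", "calico-apiserver-bd9c664b8-7445h"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(a.assign(p))
		}(pod)
	}
	wg.Wait()
}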
Oct 28 23:23:20.066384 containerd[1565]: 2025-10-28 23:23:20.035 [INFO][4305] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" HandleID="k8s-pod-network.f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Workload="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.067272 containerd[1565]: 2025-10-28 23:23:20.039 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0", GenerateName:"calico-apiserver-bd9c664b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"231f92ee-a43d-4793-84d1-fee8282bd19b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd9c664b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bd9c664b8-7445h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19cb98f5c78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:20.067347 containerd[1565]: 2025-10-28 23:23:20.041 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.067347 containerd[1565]: 2025-10-28 23:23:20.041 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19cb98f5c78 ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.067347 containerd[1565]: 2025-10-28 23:23:20.045 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.067410 containerd[1565]: 2025-10-28 23:23:20.046 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0", GenerateName:"calico-apiserver-bd9c664b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"231f92ee-a43d-4793-84d1-fee8282bd19b", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bd9c664b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2", Pod:"calico-apiserver-bd9c664b8-7445h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19cb98f5c78", MAC:"36:3d:01:fa:13:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:20.067455 containerd[1565]: 2025-10-28 23:23:20.062 [INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" Namespace="calico-apiserver" Pod="calico-apiserver-bd9c664b8-7445h" WorkloadEndpoint="localhost-k8s-calico--apiserver--bd9c664b8--7445h-eth0" Oct 28 23:23:20.105081 containerd[1565]: time="2025-10-28T23:23:20.091889993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-n79zh,Uid:c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"afbfeeab45af42702207dc1d30759500c0343c8ec990a1b2f854b9a1947d2a0c\"" Oct 28 23:23:20.105862 containerd[1565]: time="2025-10-28T23:23:20.097150637Z" level=info msg="connecting to shim f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2" address="unix:///run/containerd/s/e08a95f1a912b69885f44aa7e9653177cfc9033794ebf8d81671e7201d77dfcc" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:20.107757 containerd[1565]: time="2025-10-28T23:23:20.107715288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:23:20.151342 systemd[1]: Started cri-containerd-f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2.scope - libcontainer container f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2. 
Oct 28 23:23:20.175217 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:20.208413 containerd[1565]: time="2025-10-28T23:23:20.208345446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bd9c664b8-7445h,Uid:231f92ee-a43d-4793-84d1-fee8282bd19b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f02f509c171b980dff61484900e0eecf1b4372f08b18461c7682a9903ee38cc2\"" Oct 28 23:23:20.317455 containerd[1565]: time="2025-10-28T23:23:20.317232415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:20.318704 containerd[1565]: time="2025-10-28T23:23:20.318668434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:23:20.318770 containerd[1565]: time="2025-10-28T23:23:20.318728558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:20.318990 kubelet[2718]: E1028 23:23:20.318933 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:20.319298 kubelet[2718]: E1028 23:23:20.318989 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:20.319298 kubelet[2718]: E1028 23:23:20.319235 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbj6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bd9c664b8-n79zh_calico-apiserver(c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:20.319482 containerd[1565]: time="2025-10-28T23:23:20.319456289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:23:20.321449 kubelet[2718]: E1028 23:23:20.321400 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:20.544595 containerd[1565]: time="2025-10-28T23:23:20.544496129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:20.545496 containerd[1565]: time="2025-10-28T23:23:20.545462996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:23:20.545559 containerd[1565]: time="2025-10-28T23:23:20.545502879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:20.545708 kubelet[2718]: E1028 23:23:20.545671 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:20.545780 kubelet[2718]: E1028 23:23:20.545721 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:20.546161 kubelet[2718]: E1028 23:23:20.545859 
2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bd9c664b8-7445h_calico-apiserver(231f92ee-a43d-4793-84d1-fee8282bd19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:20.547078 kubelet[2718]: E1028 23:23:20.547030 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b" Oct 28 23:23:20.772326 containerd[1565]: time="2025-10-28T23:23:20.772171392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s5mgj,Uid:f4d309fd-dc09-4824-8579-3a6f396ee7ce,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:20.879592 systemd-networkd[1480]: cali1f1a0476359: Link UP Oct 28 23:23:20.880052 systemd-networkd[1480]: cali1f1a0476359: Gained carrier Oct 28 23:23:20.893693 containerd[1565]: 2025-10-28 23:23:20.812 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--s5mgj-eth0 goldmane-666569f655- calico-system f4d309fd-dc09-4824-8579-3a6f396ee7ce 895 0 2025-10-28 23:22:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-s5mgj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1f1a0476359 [] [] }} ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-" Oct 28 23:23:20.893693 containerd[1565]: 2025-10-28 23:23:20.812 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.893693 containerd[1565]: 2025-10-28 23:23:20.838 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" HandleID="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Workload="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.838 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" HandleID="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Workload="localhost-k8s-goldmane--666569f655--s5mgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000494100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-s5mgj", "timestamp":"2025-10-28 23:23:20.838080909 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.838 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.838 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
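Editor's note: the calico-apiserver container spec dumped a few entries above declares a readiness probe — HTTPS GET on /readyz, port 5443, TimeoutSeconds 5, PeriodSeconds 60 — which never runs here because the image pull fails. A hedged sketch of what one such probe boils down to, using only the standard library and skipping certificate verification roughly the way kubelet's Scheme:HTTPS probes do:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeReadyz performs one HTTPS readiness check with the parameters from the
// container spec above (path /readyz, port 5443, 5s timeout). Any 2xx or 3xx
// response counts as ready.
func probeReadyz(host string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(fmt.Sprintf("https://%s:5443/readyz", host))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("not ready: %s", resp.Status)
	}
	return nil
}

func main() {
	// Pod IP from the log; in this trace the pod never becomes ready.
	fmt.Println(probeReadyz("192.168.88.130"))
}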
Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.838 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.847 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" host="localhost" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.851 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.856 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.858 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.860 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:20.894083 containerd[1565]: 2025-10-28 23:23:20.860 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" host="localhost" Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.861 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65 Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.865 [INFO][4450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" host="localhost" Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.870 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" host="localhost" Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.870 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" host="localhost" Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.871 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 23:23:20.894880 containerd[1565]: 2025-10-28 23:23:20.871 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" HandleID="k8s-pod-network.b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Workload="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.894999 containerd[1565]: 2025-10-28 23:23:20.875 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s5mgj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4d309fd-dc09-4824-8579-3a6f396ee7ce", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-s5mgj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f1a0476359", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:20.894999 containerd[1565]: 2025-10-28 23:23:20.875 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.895077 containerd[1565]: 2025-10-28 23:23:20.875 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f1a0476359 ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.895077 containerd[1565]: 2025-10-28 23:23:20.880 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.895280 containerd[1565]: 2025-10-28 23:23:20.881 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s5mgj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f4d309fd-dc09-4824-8579-3a6f396ee7ce", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65", Pod:"goldmane-666569f655-s5mgj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1f1a0476359", MAC:"8a:c4:d5:c1:e1:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:20.895356 containerd[1565]: 2025-10-28 23:23:20.888 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" Namespace="calico-system" Pod="goldmane-666569f655-s5mgj" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s5mgj-eth0" Oct 28 23:23:20.917773 containerd[1565]: time="2025-10-28T23:23:20.917727376Z" level=info msg="connecting to shim b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65" address="unix:///run/containerd/s/b4f00598c2e430b882c1c523735912c7faa6fee395e977515a26b0f178256729" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:20.940579 kubelet[2718]: E1028 23:23:20.940490 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b" Oct 28 23:23:20.943354 systemd[1]: Started cri-containerd-b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65.scope - libcontainer container b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65. 
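Editor's note: by this point each failing pull has flipped from ErrImagePull to ImagePullBackOff, as in the "Back-off pulling image" messages just above, and kubelet retries on an exponential backoff. The sketch below reproduces the usual doubling-with-a-cap pattern; the 10-second initial delay and 5-minute cap are commonly cited kubelet defaults taken here as assumptions, not values read from this system's configuration.

package main

import (
	"fmt"
	"time"
)

// pullBackoff returns the delay before retry n of a failing image pull,
// assuming a 10s initial backoff that doubles up to a 5-minute cap.
func pullBackoff(n int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < n; i++ {
		d *= 2
		if d > 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("retry %d after %v\n", n, pullBackoff(n))
	}
}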
Oct 28 23:23:20.943988 kubelet[2718]: E1028 23:23:20.943904 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:20.966075 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:20.990482 containerd[1565]: time="2025-10-28T23:23:20.990444804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s5mgj,Uid:f4d309fd-dc09-4824-8579-3a6f396ee7ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4e2d8ee0a799df132a129533ed98d8bcbaeb62720184b5026de8bf13d6c4a65\"" Oct 28 23:23:20.992105 containerd[1565]: time="2025-10-28T23:23:20.992070957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 23:23:21.032299 systemd-networkd[1480]: calie21494a1b9d: Gained IPv6LL Oct 28 23:23:21.206516 containerd[1565]: time="2025-10-28T23:23:21.206461418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:21.207447 containerd[1565]: time="2025-10-28T23:23:21.207412362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 23:23:21.207603 containerd[1565]: time="2025-10-28T23:23:21.207443725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:21.207701 kubelet[2718]: E1028 23:23:21.207637 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:23:21.207764 kubelet[2718]: E1028 23:23:21.207693 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:23:21.210223 kubelet[2718]: E1028 23:23:21.207848 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq46n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s5mgj_calico-system(f4d309fd-dc09-4824-8579-3a6f396ee7ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:21.211492 kubelet[2718]: E1028 23:23:21.211347 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s5mgj" podUID="f4d309fd-dc09-4824-8579-3a6f396ee7ce" Oct 28 23:23:21.288285 systemd-networkd[1480]: 
cali19cb98f5c78: Gained IPv6LL Oct 28 23:23:21.946558 kubelet[2718]: E1028 23:23:21.946512 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:21.946921 kubelet[2718]: E1028 23:23:21.946683 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s5mgj" podUID="f4d309fd-dc09-4824-8579-3a6f396ee7ce" Oct 28 23:23:21.946961 kubelet[2718]: E1028 23:23:21.946936 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b" Oct 28 23:23:22.696272 systemd-networkd[1480]: cali1f1a0476359: Gained IPv6LL Oct 28 23:23:22.772659 kubelet[2718]: E1028 23:23:22.772529 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:22.773035 containerd[1565]: time="2025-10-28T23:23:22.772994343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrbhn,Uid:0976481c-b05f-4bb8-b3b4-26cc6309b8dc,Namespace:kube-system,Attempt:0,}" Oct 28 23:23:22.882849 systemd-networkd[1480]: caliafe343f5a04: Link UP Oct 28 23:23:22.882999 systemd-networkd[1480]: caliafe343f5a04: Gained carrier Oct 28 23:23:22.893639 containerd[1565]: 2025-10-28 23:23:22.818 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0 coredns-674b8bbfcf- kube-system 0976481c-b05f-4bb8-b3b4-26cc6309b8dc 896 0 2025-10-28 23:22:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nrbhn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliafe343f5a04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-" Oct 28 23:23:22.893639 
containerd[1565]: 2025-10-28 23:23:22.819 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.893639 containerd[1565]: 2025-10-28 23:23:22.843 [INFO][4534] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" HandleID="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Workload="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.843 [INFO][4534] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" HandleID="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Workload="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dda50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nrbhn", "timestamp":"2025-10-28 23:23:22.843545313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.843 [INFO][4534] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.843 [INFO][4534] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.843 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.852 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" host="localhost" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.857 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.861 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.863 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.865 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:22.894196 containerd[1565]: 2025-10-28 23:23:22.865 [INFO][4534] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" host="localhost" Oct 28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.866 [INFO][4534] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504 Oct 28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.869 [INFO][4534] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" host="localhost" Oct 
28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.874 [INFO][4534] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" host="localhost" Oct 28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.875 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" host="localhost" Oct 28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.875 [INFO][4534] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 28 23:23:22.894457 containerd[1565]: 2025-10-28 23:23:22.875 [INFO][4534] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" HandleID="k8s-pod-network.26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Workload="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.894568 containerd[1565]: 2025-10-28 23:23:22.878 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0976481c-b05f-4bb8-b3b4-26cc6309b8dc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nrbhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafe343f5a04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:22.894808 containerd[1565]: 2025-10-28 23:23:22.878 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.894808 containerd[1565]: 2025-10-28 23:23:22.878 
[INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafe343f5a04 ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.894808 containerd[1565]: 2025-10-28 23:23:22.881 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.894911 containerd[1565]: 2025-10-28 23:23:22.881 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0976481c-b05f-4bb8-b3b4-26cc6309b8dc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504", Pod:"coredns-674b8bbfcf-nrbhn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliafe343f5a04", MAC:"a2:fb:f8:ef:58:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:22.894911 containerd[1565]: 2025-10-28 23:23:22.890 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrbhn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nrbhn-eth0" Oct 28 23:23:22.916768 containerd[1565]: time="2025-10-28T23:23:22.916730538Z" level=info msg="connecting to shim 26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504" address="unix:///run/containerd/s/162b8739dd1cfebf3237065f8698504fec58bdd8916993191ee4af9b981263b1" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:22.943366 systemd[1]: Started 
cri-containerd-26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504.scope - libcontainer container 26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504. Oct 28 23:23:22.948816 kubelet[2718]: E1028 23:23:22.948715 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s5mgj" podUID="f4d309fd-dc09-4824-8579-3a6f396ee7ce" Oct 28 23:23:22.962807 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:22.989016 containerd[1565]: time="2025-10-28T23:23:22.988976301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrbhn,Uid:0976481c-b05f-4bb8-b3b4-26cc6309b8dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504\"" Oct 28 23:23:22.990197 kubelet[2718]: E1028 23:23:22.990044 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:22.995775 containerd[1565]: time="2025-10-28T23:23:22.995741071Z" level=info msg="CreateContainer within sandbox \"26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:23:23.005463 containerd[1565]: time="2025-10-28T23:23:23.004265354Z" level=info msg="Container 8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:23.012673 containerd[1565]: time="2025-10-28T23:23:23.012629380Z" level=info msg="CreateContainer within sandbox \"26548c1a968fb3c5f8a05ce536f66fa910dd03cc9e0d2415383645a34cd40504\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c\"" Oct 28 23:23:23.013201 containerd[1565]: time="2025-10-28T23:23:23.013177855Z" level=info msg="StartContainer for \"8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c\"" Oct 28 23:23:23.013963 containerd[1565]: time="2025-10-28T23:23:23.013931265Z" level=info msg="connecting to shim 8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c" address="unix:///run/containerd/s/162b8739dd1cfebf3237065f8698504fec58bdd8916993191ee4af9b981263b1" protocol=ttrpc version=3 Oct 28 23:23:23.035344 systemd[1]: Started cri-containerd-8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c.scope - libcontainer container 8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c. Oct 28 23:23:23.064582 containerd[1565]: time="2025-10-28T23:23:23.064522206Z" level=info msg="StartContainer for \"8cb61604e94608921f8fffb1951adb43d1ada6bf2b2df2c5b3e3a0e582f89a4c\" returns successfully" Oct 28 23:23:23.493623 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:51476.service - OpenSSH per-connection server daemon (10.0.0.1:51476). 
Oct 28 23:23:23.559426 sshd[4633]: Accepted publickey for core from 10.0.0.1 port 51476 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:23.561034 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:23.565087 systemd-logind[1542]: New session 11 of user core. Oct 28 23:23:23.571328 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 28 23:23:23.725369 sshd[4637]: Connection closed by 10.0.0.1 port 51476 Oct 28 23:23:23.725997 sshd-session[4633]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:23.735322 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:51476.service: Deactivated successfully. Oct 28 23:23:23.737022 systemd[1]: session-11.scope: Deactivated successfully. Oct 28 23:23:23.737834 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Oct 28 23:23:23.740597 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:51482.service - OpenSSH per-connection server daemon (10.0.0.1:51482). Oct 28 23:23:23.741100 systemd-logind[1542]: Removed session 11. Oct 28 23:23:23.773423 containerd[1565]: time="2025-10-28T23:23:23.773378032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcfcd784d-qg55p,Uid:c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:23.773955 containerd[1565]: time="2025-10-28T23:23:23.773905066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnlxb,Uid:74371a3d-9f28-4cd9-84c2-dcf19d44a64f,Namespace:calico-system,Attempt:0,}" Oct 28 23:23:23.774084 kubelet[2718]: E1028 23:23:23.774052 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:23.776314 containerd[1565]: time="2025-10-28T23:23:23.776129211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvm7p,Uid:600f9b27-f2ba-4525-b4d0-74dcb8a935ee,Namespace:kube-system,Attempt:0,}" Oct 28 23:23:23.813136 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 51482 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:23.815419 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:23.828392 systemd-logind[1542]: New session 12 of user core. Oct 28 23:23:23.833391 systemd[1]: Started session-12.scope - Session 12 of User core. 
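[editor's note] The dns.go:153 warnings in this log ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") come from the kubelet trimming the host resolv.conf down to the resolver's conventional limit of three nameservers; the exact constant in this kubelet build is an assumption here, but the applied line above keeps exactly three servers. A rough sketch of that trimming behaviour, for illustration only:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Conventional libc resolver limit; assumed to match what the kubelet enforces.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            // This is the situation the kubelet warns about: extra servers are dropped.
            fmt.Printf("omitting %d nameserver(s): %v\n",
                len(servers)-maxNameservers, servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }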
Oct 28 23:23:23.940912 systemd-networkd[1480]: cali8d4182acb1f: Link UP Oct 28 23:23:23.943025 systemd-networkd[1480]: cali8d4182acb1f: Gained carrier Oct 28 23:23:23.955215 kubelet[2718]: E1028 23:23:23.955176 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.844 [INFO][4655] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0 calico-kube-controllers-5fcfcd784d- calico-system c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2 893 0 2025-10-28 23:22:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fcfcd784d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5fcfcd784d-qg55p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8d4182acb1f [] [] }} ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.844 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.886 [INFO][4705] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" HandleID="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Workload="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.886 [INFO][4705] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" HandleID="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Workload="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5fcfcd784d-qg55p", "timestamp":"2025-10-28 23:23:23.886389288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.887 [INFO][4705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.887 [INFO][4705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.887 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.900 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.909 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.914 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.917 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.920 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.920 [INFO][4705] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.923 [INFO][4705] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9 Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.927 [INFO][4705] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.935 [INFO][4705] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.935 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" host="localhost" Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.935 [INFO][4705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
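[editor's note] The ipam.go lines just above trace Calico's assignment order for calico-kube-controllers-5fcfcd784d-qg55p: take the host-wide IPAM lock, confirm the host's affinity for block 192.168.88.128/26, load the block, pick the next free address (192.168.88.134 here), record a handle, and write the block back to claim the IP. The following is only a toy illustration of that block-allocation idea, not Calico's implementation; the CIDR and handle string are taken from the log, and the pre-claimed ordinals just stand in for the addresses already in use below .134 on this host:

    package main

    import (
        "fmt"
        "net"
    )

    // Toy block allocator: illustrates the load-block / find-free-ordinal /
    // record-handle / claim sequence logged above. Not Calico's real code.
    type block struct {
        cidr      *net.IPNet
        allocated map[int]string // ordinal within the block -> handle ID
    }

    func (b *block) assign(handle string) (net.IP, error) {
        ones, bits := b.cidr.Mask.Size()
        size := 1 << (bits - ones) // 64 addresses in a /26
        base := b.cidr.IP.To4()
        for ord := 0; ord < size; ord++ {
            if _, taken := b.allocated[ord]; taken {
                continue
            }
            b.allocated[ord] = handle
            ip := make(net.IP, 4)
            copy(ip, base)
            ip[3] += byte(ord) // safe: an aligned /26 never crosses the last octet
            return ip, nil
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: cidr, allocated: map[int]string{}}
        // Stand-ins for the six addresses already claimed before .134 was assigned.
        for ord := 0; ord < 6; ord++ {
            b.allocated[ord] = "already-claimed"
        }
        ip, _ := b.assign("k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9")
        fmt.Println("assigned", ip) // 192.168.88.134, matching the log above
    }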
Oct 28 23:23:23.962935 containerd[1565]: 2025-10-28 23:23:23.935 [INFO][4705] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" HandleID="k8s-pod-network.8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Workload="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.937 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0", GenerateName:"calico-kube-controllers-5fcfcd784d-", Namespace:"calico-system", SelfLink:"", UID:"c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcfcd784d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5fcfcd784d-qg55p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d4182acb1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.937 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.938 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d4182acb1f ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.943 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.943 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0", GenerateName:"calico-kube-controllers-5fcfcd784d-", Namespace:"calico-system", SelfLink:"", UID:"c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcfcd784d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9", Pod:"calico-kube-controllers-5fcfcd784d-qg55p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d4182acb1f", MAC:"fa:50:20:e8:76:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:23.963471 containerd[1565]: 2025-10-28 23:23:23.954 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" Namespace="calico-system" Pod="calico-kube-controllers-5fcfcd784d-qg55p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcfcd784d--qg55p-eth0" Oct 28 23:23:23.971185 kubelet[2718]: I1028 23:23:23.971084 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nrbhn" podStartSLOduration=46.971062734 podStartE2EDuration="46.971062734s" podCreationTimestamp="2025-10-28 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:23:23.969996785 +0000 UTC m=+52.309186318" watchObservedRunningTime="2025-10-28 23:23:23.971062734 +0000 UTC m=+52.310252227" Oct 28 23:23:24.012712 containerd[1565]: time="2025-10-28T23:23:24.012673637Z" level=info msg="connecting to shim 8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9" address="unix:///run/containerd/s/a3cef37f89fcfe0a450fb8399e7b1c065182f8fa22bedaa1172cbf0b555c3403" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:24.041314 systemd-networkd[1480]: caliafe343f5a04: Gained IPv6LL Oct 28 23:23:24.046095 systemd-networkd[1480]: calibf7339f5338: Link UP Oct 28 23:23:24.047193 systemd-networkd[1480]: calibf7339f5338: Gained carrier Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.851 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0 coredns-674b8bbfcf- kube-system 600f9b27-f2ba-4525-b4d0-74dcb8a935ee 899 0 2025-10-28 23:22:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dvm7p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf7339f5338 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.851 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.889 [INFO][4712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" HandleID="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Workload="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.890 [INFO][4712] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" HandleID="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Workload="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dvm7p", "timestamp":"2025-10-28 23:23:23.889202151 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.890 [INFO][4712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.937 [INFO][4712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:23.937 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.003 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.008 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.014 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.016 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.019 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.019 [INFO][4712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.021 [INFO][4712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4 Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.026 [INFO][4712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.034 [INFO][4712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.034 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" host="localhost" Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.034 [INFO][4712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 23:23:24.065368 containerd[1565]: 2025-10-28 23:23:24.034 [INFO][4712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" HandleID="k8s-pod-network.255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Workload="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.066058 containerd[1565]: 2025-10-28 23:23:24.040 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"600f9b27-f2ba-4525-b4d0-74dcb8a935ee", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dvm7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf7339f5338", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:24.066058 containerd[1565]: 2025-10-28 23:23:24.040 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.066058 containerd[1565]: 2025-10-28 23:23:24.040 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf7339f5338 ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.066058 containerd[1565]: 2025-10-28 23:23:24.047 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.066058 
containerd[1565]: 2025-10-28 23:23:24.047 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"600f9b27-f2ba-4525-b4d0-74dcb8a935ee", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4", Pod:"coredns-674b8bbfcf-dvm7p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf7339f5338", MAC:"36:bc:f6:e6:72:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:24.066058 containerd[1565]: 2025-10-28 23:23:24.060 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvm7p" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvm7p-eth0" Oct 28 23:23:24.073301 systemd[1]: Started cri-containerd-8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9.scope - libcontainer container 8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9. Oct 28 23:23:24.092738 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:24.108768 containerd[1565]: time="2025-10-28T23:23:24.108723837Z" level=info msg="connecting to shim 255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4" address="unix:///run/containerd/s/5dd90a3db19a1eeec17f7cbde5b65d7579287ec3ffe6a1f36a5a0c379a38862b" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:24.117259 sshd[4700]: Connection closed by 10.0.0.1 port 51482 Oct 28 23:23:24.117772 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:24.138085 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:51482.service: Deactivated successfully. 
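[editor's note] The systemd-networkd "Link UP" / "Gained carrier" / "Gained IPv6LL" messages in this log refer to the host-side cali* veths the CNI plugin creates (for example caliafe343f5a04, which the dataplane_linux.go line above names as the host-side veth for coredns-674b8bbfcf-nrbhn). A small sketch of inspecting one of them from Go, assuming the github.com/vishvananda/netlink package and that it runs on this node; illustration only:

    package main

    import (
        "fmt"
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Host-side veth named earlier in this log; any other cali* interface
        // from these entries works the same way.
        link, err := netlink.LinkByName("caliafe343f5a04")
        if err != nil {
            log.Fatal(err)
        }
        attrs := link.Attrs()
        fmt.Printf("%s: state=%s mac=%s mtu=%d\n",
            attrs.Name, attrs.OperState, attrs.HardwareAddr, attrs.MTU)

        // "Gained IPv6LL" corresponds to a link-local fe80:: address showing up
        // once the interface is carrier-up.
        addrs, err := netlink.AddrList(link, netlink.FAMILY_V6)
        if err != nil {
            log.Fatal(err)
        }
        for _, a := range addrs {
            fmt.Println("ipv6:", a.IPNet)
        }
    }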
Oct 28 23:23:24.142280 systemd[1]: session-12.scope: Deactivated successfully. Oct 28 23:23:24.145529 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Oct 28 23:23:24.151424 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:51488.service - OpenSSH per-connection server daemon (10.0.0.1:51488). Oct 28 23:23:24.155345 systemd-logind[1542]: Removed session 12. Oct 28 23:23:24.198471 systemd[1]: Started cri-containerd-255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4.scope - libcontainer container 255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4. Oct 28 23:23:24.215182 containerd[1565]: time="2025-10-28T23:23:24.215137702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcfcd784d-qg55p,Uid:c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8621ffc205d824fde2f4cc1c598e776ac9918320b3c5219ce82dde590dfd5ba9\"" Oct 28 23:23:24.219237 containerd[1565]: time="2025-10-28T23:23:24.219200002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 23:23:24.233334 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:24.252267 systemd-networkd[1480]: calic3434f242be: Link UP Oct 28 23:23:24.252851 systemd-networkd[1480]: calic3434f242be: Gained carrier Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:23.854 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hnlxb-eth0 csi-node-driver- calico-system 74371a3d-9f28-4cd9-84c2-dcf19d44a64f 785 0 2025-10-28 23:22:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hnlxb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic3434f242be [] [] }} ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:23.855 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:23.901 [INFO][4722] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" HandleID="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Workload="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:23.901 [INFO][4722] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" HandleID="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Workload="localhost-k8s-csi--node--driver--hnlxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"csi-node-driver-hnlxb", "timestamp":"2025-10-28 23:23:23.901415149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:23.901 [INFO][4722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.035 [INFO][4722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.035 [INFO][4722] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.103 [INFO][4722] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.209 [INFO][4722] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.220 [INFO][4722] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.223 [INFO][4722] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.229 [INFO][4722] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.229 [INFO][4722] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.233 [INFO][4722] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891 Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.236 [INFO][4722] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.244 [INFO][4722] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.246 [INFO][4722] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" host="localhost" Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.246 [INFO][4722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 28 23:23:24.271301 containerd[1565]: 2025-10-28 23:23:24.246 [INFO][4722] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" HandleID="k8s-pod-network.97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Workload="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.250 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hnlxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74371a3d-9f28-4cd9-84c2-dcf19d44a64f", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hnlxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3434f242be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.250 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.250 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3434f242be ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.253 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.253 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hnlxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"74371a3d-9f28-4cd9-84c2-dcf19d44a64f", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.October, 28, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891", Pod:"csi-node-driver-hnlxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic3434f242be", MAC:"da:25:a9:ba:3e:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 28 23:23:24.271785 containerd[1565]: 2025-10-28 23:23:24.266 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" Namespace="calico-system" Pod="csi-node-driver-hnlxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--hnlxb-eth0" Oct 28 23:23:24.281445 containerd[1565]: time="2025-10-28T23:23:24.281299465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvm7p,Uid:600f9b27-f2ba-4525-b4d0-74dcb8a935ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4\"" Oct 28 23:23:24.283359 kubelet[2718]: E1028 23:23:24.282535 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:24.283765 sshd[4836]: Accepted publickey for core from 10.0.0.1 port 51488 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:24.285972 sshd-session[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:24.286979 containerd[1565]: time="2025-10-28T23:23:24.285997246Z" level=info msg="CreateContainer within sandbox \"255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 28 23:23:24.292689 systemd-logind[1542]: New session 13 of user core. Oct 28 23:23:24.299352 containerd[1565]: time="2025-10-28T23:23:24.299015001Z" level=info msg="Container c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857: CDI devices from CRI Config.CDIDevices: []" Oct 28 23:23:24.301286 systemd[1]: Started session-13.scope - Session 13 of User core. 
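
The Calico CNI trace above takes the host-wide IPAM lock, confirms the host's affinity for block 192.168.88.128/26, claims 192.168.88.136 for csi-node-driver-hnlxb, and then writes the finished endpoint (interface calic3434f242be, MAC da:25:a9:ba:3e:7a) to the datastore. As a purely illustrative check of that arithmetic, the standard-library Python sketch below confirms the claimed address sits inside the affine /26 and shows the block's size; it is not part of the log or of Calico itself.

# Sanity-check the IPAM claim recorded above (stdlib only, illustrative).
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # block the host has affinity for
assigned = ipaddress.ip_address("192.168.88.136")   # address claimed for the pod

print(assigned in block)                                     # True: claim is inside the affine block
print(block.num_addresses)                                   # 64 addresses in a /26
print(block.network_address, "-", block.broadcast_address)   # 192.168.88.128 - 192.168.88.191
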
Oct 28 23:23:24.321283 containerd[1565]: time="2025-10-28T23:23:24.321240666Z" level=info msg="connecting to shim 97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891" address="unix:///run/containerd/s/700574aa2b25ef8177d1434bb3aa950cf40a68b91b332d20e632dd83d31f77f0" namespace=k8s.io protocol=ttrpc version=3 Oct 28 23:23:24.335356 containerd[1565]: time="2025-10-28T23:23:24.335308209Z" level=info msg="CreateContainer within sandbox \"255bf4cd666f91b5f5b8475885978cff5249762ddc63fdb382fb5102149971a4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857\"" Oct 28 23:23:24.337756 containerd[1565]: time="2025-10-28T23:23:24.337723564Z" level=info msg="StartContainer for \"c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857\"" Oct 28 23:23:24.338750 containerd[1565]: time="2025-10-28T23:23:24.338722348Z" level=info msg="connecting to shim c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857" address="unix:///run/containerd/s/5dd90a3db19a1eeec17f7cbde5b65d7579287ec3ffe6a1f36a5a0c379a38862b" protocol=ttrpc version=3 Oct 28 23:23:24.352295 systemd[1]: Started cri-containerd-97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891.scope - libcontainer container 97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891. Oct 28 23:23:24.371302 systemd[1]: Started cri-containerd-c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857.scope - libcontainer container c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857. Oct 28 23:23:24.378973 systemd-resolved[1276]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 28 23:23:24.400791 containerd[1565]: time="2025-10-28T23:23:24.400735285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hnlxb,Uid:74371a3d-9f28-4cd9-84c2-dcf19d44a64f,Namespace:calico-system,Attempt:0,} returns sandbox id \"97eaaaa09840bfb9aec17b03d7d1bf7d4c9c2238ff055e2495d872ef6b805891\"" Oct 28 23:23:24.411915 containerd[1565]: time="2025-10-28T23:23:24.411881160Z" level=info msg="StartContainer for \"c86b9bb7497c47cfec0a3122460e3b07f674b2420dbc7ab14dac3b4e0f230857\" returns successfully" Oct 28 23:23:24.424392 containerd[1565]: time="2025-10-28T23:23:24.424346799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:24.425115 containerd[1565]: time="2025-10-28T23:23:24.425073686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 23:23:24.425210 containerd[1565]: time="2025-10-28T23:23:24.425173692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 23:23:24.425379 kubelet[2718]: E1028 23:23:24.425330 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:23:24.425440 kubelet[2718]: E1028 23:23:24.425385 2718 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:23:24.426007 containerd[1565]: time="2025-10-28T23:23:24.425660443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 23:23:24.426073 kubelet[2718]: E1028 23:23:24.425750 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxk98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fcfcd784d-qg55p_calico-system(c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:24.426963 kubelet[2718]: E1028 
23:23:24.426915 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" podUID="c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2" Oct 28 23:23:24.429265 sshd[4875]: Connection closed by 10.0.0.1 port 51488 Oct 28 23:23:24.429600 sshd-session[4836]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:24.434242 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:51488.service: Deactivated successfully. Oct 28 23:23:24.436727 systemd[1]: session-13.scope: Deactivated successfully. Oct 28 23:23:24.439826 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Oct 28 23:23:24.442028 systemd-logind[1542]: Removed session 13. Oct 28 23:23:24.639252 containerd[1565]: time="2025-10-28T23:23:24.639053489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:24.666131 containerd[1565]: time="2025-10-28T23:23:24.666067341Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 23:23:24.666222 containerd[1565]: time="2025-10-28T23:23:24.666161867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 23:23:24.666438 kubelet[2718]: E1028 23:23:24.666297 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:23:24.666438 kubelet[2718]: E1028 23:23:24.666344 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:23:24.666525 kubelet[2718]: E1028 23:23:24.666485 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pc66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:24.669886 containerd[1565]: time="2025-10-28T23:23:24.669808101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 23:23:24.881386 containerd[1565]: time="2025-10-28T23:23:24.881324906Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:24.882740 containerd[1565]: time="2025-10-28T23:23:24.882685994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 23:23:24.882818 containerd[1565]: time="2025-10-28T23:23:24.882767919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 23:23:24.882944 kubelet[2718]: E1028 23:23:24.882910 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:23:24.882990 kubelet[2718]: E1028 23:23:24.882958 2718 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:23:24.883114 kubelet[2718]: E1028 23:23:24.883075 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pc66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:24.884379 kubelet[2718]: E1028 23:23:24.884294 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:24.959816 kubelet[2718]: E1028 23:23:24.959595 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:24.963100 kubelet[2718]: E1028 23:23:24.963064 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" podUID="c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2" Oct 28 23:23:24.963743 kubelet[2718]: E1028 23:23:24.963717 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:24.965389 kubelet[2718]: E1028 23:23:24.965349 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:24.985820 kubelet[2718]: I1028 23:23:24.985759 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvm7p" podStartSLOduration=47.985745243 podStartE2EDuration="47.985745243s" podCreationTimestamp="2025-10-28 23:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-28 23:23:24.972526876 +0000 UTC m=+53.311716409" watchObservedRunningTime="2025-10-28 23:23:24.985745243 +0000 UTC m=+53.324934776" Oct 28 23:23:25.512289 systemd-networkd[1480]: calibf7339f5338: Gained IPv6LL Oct 28 23:23:25.576362 systemd-networkd[1480]: calic3434f242be: Gained IPv6LL Oct 28 23:23:25.960639 systemd-networkd[1480]: cali8d4182acb1f: Gained IPv6LL Oct 28 23:23:25.966177 kubelet[2718]: E1028 23:23:25.965745 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:25.966177 kubelet[2718]: E1028 23:23:25.966052 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" podUID="c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2" Oct 28 23:23:25.966541 kubelet[2718]: E1028 23:23:25.966262 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:25.967582 kubelet[2718]: E1028 23:23:25.967521 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:26.967739 kubelet[2718]: E1028 23:23:26.967654 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:29.451787 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:36338.service - OpenSSH per-connection server daemon (10.0.0.1:36338). Oct 28 23:23:29.499815 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 36338 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:29.501293 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:29.505213 systemd-logind[1542]: New session 14 of user core. Oct 28 23:23:29.514254 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 28 23:23:29.602735 sshd[4984]: Connection closed by 10.0.0.1 port 36338 Oct 28 23:23:29.602764 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:29.612418 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:36338.service: Deactivated successfully. Oct 28 23:23:29.613914 systemd[1]: session-14.scope: Deactivated successfully. Oct 28 23:23:29.615170 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Oct 28 23:23:29.617014 systemd-logind[1542]: Removed session 14. Oct 28 23:23:29.618525 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340). 
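
The recurring kubelet warning "Nameserver limits exceeded" above comes from kubelet trimming the node's resolv.conf to its first three nameservers (1.1.1.1 1.0.0.1 8.8.8.8) when building pod DNS config, since kubelet caps the list at three (matching the classic glibc limit). The Python sketch below is a minimal, hypothetical illustration of that truncation; the fourth nameserver in the sample is invented and does not appear in this log.

# Illustrative truncation behind the "Nameserver limits exceeded" warnings:
# keep only the first three nameservers, report the rest as omitted.
MAX_NAMESERVERS = 3

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

nameservers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]

applied = nameservers[:MAX_NAMESERVERS]
omitted = nameservers[MAX_NAMESERVERS:]

print("applied nameserver line:", " ".join(applied))   # 1.1.1.1 1.0.0.1 8.8.8.8 (as in the log)
if omitted:
    print("omitted:", " ".join(omitted))                # 8.8.4.4 (hypothetical)
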
Oct 28 23:23:29.685269 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:29.686873 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:29.691193 systemd-logind[1542]: New session 15 of user core. Oct 28 23:23:29.702278 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 28 23:23:29.871855 sshd[5002]: Connection closed by 10.0.0.1 port 36340 Oct 28 23:23:29.872376 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:29.885320 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:36340.service: Deactivated successfully. Oct 28 23:23:29.886993 systemd[1]: session-15.scope: Deactivated successfully. Oct 28 23:23:29.887697 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Oct 28 23:23:29.889929 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:36352.service - OpenSSH per-connection server daemon (10.0.0.1:36352). Oct 28 23:23:29.890508 systemd-logind[1542]: Removed session 15. Oct 28 23:23:29.951485 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 36352 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:29.953006 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:29.956990 systemd-logind[1542]: New session 16 of user core. Oct 28 23:23:29.967271 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 28 23:23:30.562390 sshd[5017]: Connection closed by 10.0.0.1 port 36352 Oct 28 23:23:30.562974 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:30.573678 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:36352.service: Deactivated successfully. Oct 28 23:23:30.575672 systemd[1]: session-16.scope: Deactivated successfully. Oct 28 23:23:30.577753 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit. Oct 28 23:23:30.583059 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:36368.service - OpenSSH per-connection server daemon (10.0.0.1:36368). Oct 28 23:23:30.585319 systemd-logind[1542]: Removed session 16. Oct 28 23:23:30.652597 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 36368 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:30.654332 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:30.659064 systemd-logind[1542]: New session 17 of user core. Oct 28 23:23:30.671302 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 28 23:23:30.774246 containerd[1565]: time="2025-10-28T23:23:30.774147002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 28 23:23:30.963535 sshd[5044]: Connection closed by 10.0.0.1 port 36368 Oct 28 23:23:30.964187 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:30.979915 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:36368.service: Deactivated successfully. Oct 28 23:23:30.982952 systemd[1]: session-17.scope: Deactivated successfully. Oct 28 23:23:30.984400 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit. 
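
Every pull so far has died with a 404 from ghcr.io for a :v3.30.4 tag, and the whisker pull requested at the end of the previous entry fails the same way just below. The Python sketch that follows reproduces that probe outside the kubelet, using only the standard library and the standard Docker/OCI registry token flow; the ghcr.io endpoint URLs and the anonymous-token behaviour are assumptions about the registry (and the check needs network access), not something taken from this log.

# Probe whether a tag exists on ghcr.io, roughly as containerd does before
# reporting "fetch failed after status: 404 Not Found" above. Illustrative only.
import json
import urllib.request
import urllib.error

def tag_exists(repo: str, tag: str) -> bool:
    # Anonymous pull token for a public repository (assumed ghcr.io behaviour).
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:      # same 404 containerd keeps reporting in this log
            return False
        raise

print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))
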
Oct 28 23:23:30.985013 containerd[1565]: time="2025-10-28T23:23:30.984980092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:30.986131 containerd[1565]: time="2025-10-28T23:23:30.986089757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 28 23:23:30.986213 containerd[1565]: time="2025-10-28T23:23:30.986175922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 28 23:23:30.986480 kubelet[2718]: E1028 23:23:30.986448 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:23:30.986755 kubelet[2718]: E1028 23:23:30.986492 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 28 23:23:30.986755 kubelet[2718]: E1028 23:23:30.986601 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a00c4191deae4866a22f5a545e62360f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v85hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555d9c847b-pmc7r_calico-system(6090c7c2-848a-41cc-9996-0be408373b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:30.988479 containerd[1565]: 
time="2025-10-28T23:23:30.988373491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 28 23:23:30.989874 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:36382.service - OpenSSH per-connection server daemon (10.0.0.1:36382). Oct 28 23:23:30.991262 systemd-logind[1542]: Removed session 17. Oct 28 23:23:31.053491 sshd[5055]: Accepted publickey for core from 10.0.0.1 port 36382 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:31.055194 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:31.059336 systemd-logind[1542]: New session 18 of user core. Oct 28 23:23:31.075306 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 28 23:23:31.158542 sshd[5059]: Connection closed by 10.0.0.1 port 36382 Oct 28 23:23:31.159076 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:31.163170 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:36382.service: Deactivated successfully. Oct 28 23:23:31.164987 systemd[1]: session-18.scope: Deactivated successfully. Oct 28 23:23:31.167260 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit. Oct 28 23:23:31.168428 systemd-logind[1542]: Removed session 18. Oct 28 23:23:31.177877 containerd[1565]: time="2025-10-28T23:23:31.177832239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:31.178754 containerd[1565]: time="2025-10-28T23:23:31.178719291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 28 23:23:31.179143 containerd[1565]: time="2025-10-28T23:23:31.178797975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 28 23:23:31.179211 kubelet[2718]: E1028 23:23:31.178929 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:23:31.179211 kubelet[2718]: E1028 23:23:31.178977 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 28 23:23:31.179306 kubelet[2718]: E1028 23:23:31.179092 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v85hk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-555d9c847b-pmc7r_calico-system(6090c7c2-848a-41cc-9996-0be408373b53): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:31.180608 kubelet[2718]: E1028 23:23:31.180553 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555d9c847b-pmc7r" podUID="6090c7c2-848a-41cc-9996-0be408373b53" Oct 28 23:23:33.772807 containerd[1565]: time="2025-10-28T23:23:33.772711078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:23:34.007834 containerd[1565]: time="2025-10-28T23:23:34.007732584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:34.008627 containerd[1565]: time="2025-10-28T23:23:34.008595313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:23:34.008757 containerd[1565]: time="2025-10-28T23:23:34.008669957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:34.008808 kubelet[2718]: E1028 23:23:34.008768 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:34.009081 kubelet[2718]: E1028 23:23:34.008817 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:34.009081 kubelet[2718]: E1028 23:23:34.008966 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbj6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bd9c664b8-n79zh_calico-apiserver(c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:34.010230 kubelet[2718]: E1028 23:23:34.010181 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:36.179964 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:36396.service - OpenSSH per-connection server daemon (10.0.0.1:36396). Oct 28 23:23:36.240207 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 36396 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:36.241742 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:36.246208 systemd-logind[1542]: New session 19 of user core. Oct 28 23:23:36.253320 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 28 23:23:36.358038 sshd[5089]: Connection closed by 10.0.0.1 port 36396 Oct 28 23:23:36.358407 sshd-session[5085]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:36.362788 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:36396.service: Deactivated successfully. Oct 28 23:23:36.366653 systemd[1]: session-19.scope: Deactivated successfully. Oct 28 23:23:36.369153 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit. Oct 28 23:23:36.370854 systemd-logind[1542]: Removed session 19. 
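
By this point the same handful of :v3.30.4 image references has failed repeatedly (kube-controllers, csi, node-driver-registrar, whisker, whisker-backend, apiserver). When triaging a journal like this one, it can help to collapse the noise into a per-image failure count; the sketch below does that over a few trimmed stand-in lines (the image names are taken from this log, the line layout is simplified, and the duplicated apiserver line mirrors its repeated pull attempts).

# Collapse repeated PullImage failures into a per-image count (illustrative).
import re
from collections import Counter

journal = r'''
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
containerd[1565]: level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
'''

pattern = re.compile(r'PullImage \\"([^"\\]+)\\" failed')
for image, count in Counter(pattern.findall(journal)).most_common():
    print(f"{count}x  {image}")
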
Oct 28 23:23:36.773479 containerd[1565]: time="2025-10-28T23:23:36.773424346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 28 23:23:36.975272 containerd[1565]: time="2025-10-28T23:23:36.975230397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:36.976129 containerd[1565]: time="2025-10-28T23:23:36.976086724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 28 23:23:36.976195 containerd[1565]: time="2025-10-28T23:23:36.976151128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:36.976349 kubelet[2718]: E1028 23:23:36.976313 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:36.976649 kubelet[2718]: E1028 23:23:36.976424 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 28 23:23:36.976649 kubelet[2718]: E1028 23:23:36.976583 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjcf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-bd9c664b8-7445h_calico-apiserver(231f92ee-a43d-4793-84d1-fee8282bd19b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:36.977934 kubelet[2718]: E1028 23:23:36.977875 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b" Oct 28 23:23:37.772684 containerd[1565]: time="2025-10-28T23:23:37.772641645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 28 23:23:38.083753 containerd[1565]: time="2025-10-28T23:23:38.083558343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:38.097205 containerd[1565]: time="2025-10-28T23:23:38.097156278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 28 23:23:38.097286 containerd[1565]: time="2025-10-28T23:23:38.097221561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 28 23:23:38.097437 kubelet[2718]: E1028 23:23:38.097358 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:23:38.097437 kubelet[2718]: E1028 23:23:38.097434 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 28 23:23:38.097714 kubelet[2718]: E1028 23:23:38.097656 2718 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq46n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s5mgj_calico-system(f4d309fd-dc09-4824-8579-3a6f396ee7ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:38.097820 containerd[1565]: time="2025-10-28T23:23:38.097716108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 28 23:23:38.099271 kubelet[2718]: E1028 23:23:38.099211 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s5mgj" podUID="f4d309fd-dc09-4824-8579-3a6f396ee7ce" Oct 28 23:23:38.338783 containerd[1565]: time="2025-10-28T23:23:38.338658361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:38.339977 containerd[1565]: time="2025-10-28T23:23:38.339920069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 28 23:23:38.340040 containerd[1565]: time="2025-10-28T23:23:38.339993713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 28 23:23:38.340205 kubelet[2718]: E1028 23:23:38.340157 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:23:38.340205 kubelet[2718]: E1028 23:23:38.340206 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 28 23:23:38.340431 kubelet[2718]: E1028 23:23:38.340338 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxk98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fcfcd784d-qg55p_calico-system(c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:38.341678 kubelet[2718]: E1028 23:23:38.341549 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fcfcd784d-qg55p" podUID="c8b3ca1f-9314-4a0e-9dc4-82d040dc18d2" Oct 28 23:23:40.772628 containerd[1565]: time="2025-10-28T23:23:40.772587952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 28 23:23:40.994217 containerd[1565]: time="2025-10-28T23:23:40.994167339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:40.995217 containerd[1565]: time="2025-10-28T23:23:40.995141732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 28 23:23:40.995271 containerd[1565]: time="2025-10-28T23:23:40.995220568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 28 23:23:40.997296 kubelet[2718]: E1028 23:23:40.995500 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:23:40.997296 kubelet[2718]: E1028 23:23:40.995551 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 28 23:23:40.997296 kubelet[2718]: E1028 23:23:40.995733 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pc66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:40.997683 containerd[1565]: time="2025-10-28T23:23:40.997600251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 28 23:23:41.217894 containerd[1565]: time="2025-10-28T23:23:41.217754808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 28 23:23:41.218999 containerd[1565]: time="2025-10-28T23:23:41.218963312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 28 23:23:41.218999 containerd[1565]: time="2025-10-28T23:23:41.219030469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 28 23:23:41.219276 kubelet[2718]: E1028 23:23:41.219240 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:23:41.219336 kubelet[2718]: E1028 23:23:41.219295 2718 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 28 23:23:41.219445 kubelet[2718]: E1028 23:23:41.219405 2718 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pc66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hnlxb_calico-system(74371a3d-9f28-4cd9-84c2-dcf19d44a64f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 28 23:23:41.220960 kubelet[2718]: E1028 23:23:41.220904 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hnlxb" podUID="74371a3d-9f28-4cd9-84c2-dcf19d44a64f" Oct 28 23:23:41.370661 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:36600.service - OpenSSH per-connection server daemon (10.0.0.1:36600). Oct 28 23:23:41.418934 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 36600 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:41.420911 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:41.425241 systemd-logind[1542]: New session 20 of user core. Oct 28 23:23:41.444330 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 28 23:23:41.541175 sshd[5109]: Connection closed by 10.0.0.1 port 36600 Oct 28 23:23:41.542400 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:41.546791 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:36600.service: Deactivated successfully. Oct 28 23:23:41.548405 systemd[1]: session-20.scope: Deactivated successfully. Oct 28 23:23:41.549107 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit. Oct 28 23:23:41.550285 systemd-logind[1542]: Removed session 20. Oct 28 23:23:44.774114 kubelet[2718]: E1028 23:23:44.774055 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-n79zh" podUID="c39ad0d2-f1ab-413c-a4c2-0f81171d2cd7" Oct 28 23:23:45.000800 containerd[1565]: time="2025-10-28T23:23:45.000759105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d043b41faa5dc4d12dab6353832f24f435d1016cc1e07100eebfe7a023db035\" id:\"92cfc982fb01b0db241351573ac656d66bc9a52383b1a72bd622287c956c7bef\" pid:5135 exited_at:{seconds:1761693825 nanos:258164}" Oct 28 23:23:45.006892 kubelet[2718]: E1028 23:23:45.006846 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:45.772140 kubelet[2718]: E1028 23:23:45.771830 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 28 23:23:45.774601 kubelet[2718]: E1028 23:23:45.774565 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-555d9c847b-pmc7r" podUID="6090c7c2-848a-41cc-9996-0be408373b53" Oct 28 23:23:46.557290 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:36616.service - OpenSSH per-connection server daemon (10.0.0.1:36616). Oct 28 23:23:46.630286 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 36616 ssh2: RSA SHA256:OtbCm0nzVLEbk75LFoPpO8eCDdDNl8BdfCvOYDKrEdg Oct 28 23:23:46.632489 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 28 23:23:46.637991 systemd-logind[1542]: New session 21 of user core. Oct 28 23:23:46.649334 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 28 23:23:46.743216 sshd[5152]: Connection closed by 10.0.0.1 port 36616 Oct 28 23:23:46.743781 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Oct 28 23:23:46.748813 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:36616.service: Deactivated successfully. Oct 28 23:23:46.751166 systemd[1]: session-21.scope: Deactivated successfully. Oct 28 23:23:46.752034 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit. Oct 28 23:23:46.753708 systemd-logind[1542]: Removed session 21. Oct 28 23:23:47.772942 kubelet[2718]: E1028 23:23:47.772854 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bd9c664b8-7445h" podUID="231f92ee-a43d-4793-84d1-fee8282bd19b"