Sep 8 23:56:41.871910 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 8 23:56:41.871932 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Sep 8 22:15:05 -00 2025 Sep 8 23:56:41.871943 kernel: KASLR enabled Sep 8 23:56:41.871949 kernel: efi: EFI v2.7 by EDK II Sep 8 23:56:41.871954 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Sep 8 23:56:41.871960 kernel: random: crng init done Sep 8 23:56:41.871967 kernel: secureboot: Secure boot disabled Sep 8 23:56:41.871973 kernel: ACPI: Early table checksum verification disabled Sep 8 23:56:41.871979 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Sep 8 23:56:41.871987 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 8 23:56:41.871994 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872000 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872011 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872017 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872025 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872032 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872039 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872045 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872051 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:56:41.872058 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 8 23:56:41.872064 kernel: NUMA: Failed to initialise from firmware Sep 8 23:56:41.872071 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:56:41.872077 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 8 23:56:41.872083 kernel: Zone ranges: Sep 8 23:56:41.872099 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:56:41.872108 kernel: DMA32 empty Sep 8 23:56:41.872114 kernel: Normal empty Sep 8 23:56:41.872120 kernel: Movable zone start for each node Sep 8 23:56:41.872126 kernel: Early memory node ranges Sep 8 23:56:41.872132 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Sep 8 23:56:41.872139 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Sep 8 23:56:41.872145 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Sep 8 23:56:41.872151 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 8 23:56:41.872158 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 8 23:56:41.872164 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 8 23:56:41.872170 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 8 23:56:41.872176 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 8 23:56:41.872184 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 8 23:56:41.872190 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:56:41.872196 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 8 23:56:41.872206 kernel: psci: probing for conduit method from ACPI. 
Sep 8 23:56:41.872213 kernel: psci: PSCIv1.1 detected in firmware. Sep 8 23:56:41.872219 kernel: psci: Using standard PSCI v0.2 function IDs Sep 8 23:56:41.872227 kernel: psci: Trusted OS migration not required Sep 8 23:56:41.872238 kernel: psci: SMC Calling Convention v1.1 Sep 8 23:56:41.872244 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 8 23:56:41.872252 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 8 23:56:41.872259 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 8 23:56:41.872266 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 8 23:56:41.872273 kernel: Detected PIPT I-cache on CPU0 Sep 8 23:56:41.872308 kernel: CPU features: detected: GIC system register CPU interface Sep 8 23:56:41.872318 kernel: CPU features: detected: Hardware dirty bit management Sep 8 23:56:41.872328 kernel: CPU features: detected: Spectre-v4 Sep 8 23:56:41.872336 kernel: CPU features: detected: Spectre-BHB Sep 8 23:56:41.872354 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 8 23:56:41.872367 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 8 23:56:41.872374 kernel: CPU features: detected: ARM erratum 1418040 Sep 8 23:56:41.872382 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 8 23:56:41.872388 kernel: alternatives: applying boot alternatives Sep 8 23:56:41.872396 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007 Sep 8 23:56:41.872403 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:56:41.872410 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:56:41.872417 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:56:41.872423 kernel: Fallback order for Node 0: 0 Sep 8 23:56:41.872432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 8 23:56:41.872438 kernel: Policy zone: DMA Sep 8 23:56:41.872445 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:56:41.872454 kernel: software IO TLB: area num 4. Sep 8 23:56:41.872461 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 8 23:56:41.872468 kernel: Memory: 2387408K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184880K reserved, 0K cma-reserved) Sep 8 23:56:41.872475 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:56:41.872482 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:56:41.872489 kernel: rcu: RCU event tracing is enabled. Sep 8 23:56:41.872496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:56:41.872503 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:56:41.872510 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:56:41.872519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 8 23:56:41.872525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:56:41.872532 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 8 23:56:41.872539 kernel: GICv3: 256 SPIs implemented Sep 8 23:56:41.872545 kernel: GICv3: 0 Extended SPIs implemented Sep 8 23:56:41.872552 kernel: Root IRQ handler: gic_handle_irq Sep 8 23:56:41.872560 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 8 23:56:41.872569 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 8 23:56:41.872576 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 8 23:56:41.872583 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 8 23:56:41.872590 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 8 23:56:41.872598 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 8 23:56:41.872605 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 8 23:56:41.872612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:56:41.872618 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:56:41.872625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 8 23:56:41.872638 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 8 23:56:41.872646 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 8 23:56:41.872653 kernel: arm-pv: using stolen time PV Sep 8 23:56:41.872660 kernel: Console: colour dummy device 80x25 Sep 8 23:56:41.872666 kernel: ACPI: Core revision 20230628 Sep 8 23:56:41.872673 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 8 23:56:41.872682 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:56:41.872689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:56:41.872696 kernel: landlock: Up and running. Sep 8 23:56:41.872702 kernel: SELinux: Initializing. Sep 8 23:56:41.872710 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:56:41.872716 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:56:41.872723 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:56:41.872730 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:56:41.872737 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:56:41.872745 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:56:41.872752 kernel: Platform MSI: ITS@0x8080000 domain created Sep 8 23:56:41.872759 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 8 23:56:41.872766 kernel: Remapping and enabling EFI services. Sep 8 23:56:41.872776 kernel: smp: Bringing up secondary CPUs ... 
Sep 8 23:56:41.872782 kernel: Detected PIPT I-cache on CPU1 Sep 8 23:56:41.872789 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 8 23:56:41.872796 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 8 23:56:41.872803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:56:41.872812 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 8 23:56:41.872819 kernel: Detected PIPT I-cache on CPU2 Sep 8 23:56:41.872831 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 8 23:56:41.872840 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 8 23:56:41.872847 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:56:41.872855 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 8 23:56:41.872862 kernel: Detected PIPT I-cache on CPU3 Sep 8 23:56:41.872869 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 8 23:56:41.872876 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 8 23:56:41.872885 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:56:41.872892 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 8 23:56:41.872899 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:56:41.872906 kernel: SMP: Total of 4 processors activated. Sep 8 23:56:41.872913 kernel: CPU features: detected: 32-bit EL0 Support Sep 8 23:56:41.872921 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 8 23:56:41.872928 kernel: CPU features: detected: Common not Private translations Sep 8 23:56:41.872937 kernel: CPU features: detected: CRC32 instructions Sep 8 23:56:41.872946 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 8 23:56:41.872953 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 8 23:56:41.872960 kernel: CPU features: detected: LSE atomic instructions Sep 8 23:56:41.872967 kernel: CPU features: detected: Privileged Access Never Sep 8 23:56:41.872975 kernel: CPU features: detected: RAS Extension Support Sep 8 23:56:41.872982 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 8 23:56:41.872989 kernel: CPU: All CPU(s) started at EL1 Sep 8 23:56:41.872996 kernel: alternatives: applying system-wide alternatives Sep 8 23:56:41.873003 kernel: devtmpfs: initialized Sep 8 23:56:41.873011 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:56:41.873020 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:56:41.873027 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:56:41.873034 kernel: SMBIOS 3.0.0 present. 
Sep 8 23:56:41.873041 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 8 23:56:41.873048 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:56:41.873056 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 8 23:56:41.873065 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 8 23:56:41.873073 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 8 23:56:41.873081 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:56:41.873155 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Sep 8 23:56:41.873164 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:56:41.873171 kernel: cpuidle: using governor menu Sep 8 23:56:41.873178 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 8 23:56:41.873186 kernel: ASID allocator initialised with 32768 entries Sep 8 23:56:41.873193 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:56:41.873201 kernel: Serial: AMBA PL011 UART driver Sep 8 23:56:41.873208 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 8 23:56:41.873218 kernel: Modules: 0 pages in range for non-PLT usage Sep 8 23:56:41.873225 kernel: Modules: 509248 pages in range for PLT usage Sep 8 23:56:41.873232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:56:41.873239 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:56:41.873247 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 8 23:56:41.873254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 8 23:56:41.873261 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:56:41.873269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:56:41.873276 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 8 23:56:41.873285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 8 23:56:41.873292 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:56:41.873304 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:56:41.873312 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:56:41.873319 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:56:41.873327 kernel: ACPI: Interpreter enabled Sep 8 23:56:41.873334 kernel: ACPI: Using GIC for interrupt routing Sep 8 23:56:41.873341 kernel: ACPI: MCFG table detected, 1 entries Sep 8 23:56:41.873348 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 8 23:56:41.873355 kernel: printk: console [ttyAMA0] enabled Sep 8 23:56:41.873368 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:56:41.873530 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:56:41.873613 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 8 23:56:41.873696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 8 23:56:41.873774 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 8 23:56:41.873842 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 8 23:56:41.873852 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 8 23:56:41.873863 kernel: PCI host bridge to bus 0000:00 Sep 8 23:56:41.873939 kernel: pci_bus 0000:00: root bus resource [mem 
0x10000000-0x3efeffff window] Sep 8 23:56:41.874003 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 8 23:56:41.874064 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 8 23:56:41.874138 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:56:41.874226 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 8 23:56:41.874312 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:56:41.874399 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 8 23:56:41.874483 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 8 23:56:41.874562 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 8 23:56:41.874641 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 8 23:56:41.874715 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 8 23:56:41.874783 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 8 23:56:41.874852 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 8 23:56:41.874912 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 8 23:56:41.874979 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 8 23:56:41.874990 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 8 23:56:41.875000 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 8 23:56:41.875010 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 8 23:56:41.875018 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 8 23:56:41.875025 kernel: iommu: Default domain type: Translated Sep 8 23:56:41.875034 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 8 23:56:41.875042 kernel: efivars: Registered efivars operations Sep 8 23:56:41.875049 kernel: vgaarb: loaded Sep 8 23:56:41.875056 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 8 23:56:41.875063 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:56:41.875071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:56:41.875078 kernel: pnp: PnP ACPI init Sep 8 23:56:41.875170 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 8 23:56:41.875181 kernel: pnp: PnP ACPI: found 1 devices Sep 8 23:56:41.875192 kernel: NET: Registered PF_INET protocol family Sep 8 23:56:41.875205 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:56:41.875213 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:56:41.875220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:56:41.875228 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:56:41.875235 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:56:41.875243 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:56:41.875250 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:56:41.875259 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:56:41.875266 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:56:41.875274 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:56:41.875281 kernel: kvm [1]: HYP mode not available Sep 8 23:56:41.875288 kernel: Initialise system trusted keyrings Sep 8 23:56:41.875296 kernel: workingset: timestamp_bits=39 max_order=20 
bucket_order=0 Sep 8 23:56:41.875303 kernel: Key type asymmetric registered Sep 8 23:56:41.875310 kernel: Asymmetric key parser 'x509' registered Sep 8 23:56:41.875317 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 8 23:56:41.875325 kernel: io scheduler mq-deadline registered Sep 8 23:56:41.875334 kernel: io scheduler kyber registered Sep 8 23:56:41.875341 kernel: io scheduler bfq registered Sep 8 23:56:41.875349 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 8 23:56:41.875356 kernel: ACPI: button: Power Button [PWRB] Sep 8 23:56:41.875365 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 8 23:56:41.875444 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 8 23:56:41.875454 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:56:41.875462 kernel: thunder_xcv, ver 1.0 Sep 8 23:56:41.875469 kernel: thunder_bgx, ver 1.0 Sep 8 23:56:41.875479 kernel: nicpf, ver 1.0 Sep 8 23:56:41.875486 kernel: nicvf, ver 1.0 Sep 8 23:56:41.875566 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 8 23:56:41.875639 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:56:41 UTC (1757375801) Sep 8 23:56:41.875650 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 8 23:56:41.875658 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 8 23:56:41.875665 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 8 23:56:41.875673 kernel: watchdog: Hard watchdog permanently disabled Sep 8 23:56:41.875683 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:56:41.875690 kernel: Segment Routing with IPv6 Sep 8 23:56:41.875697 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:56:41.875705 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:56:41.875712 kernel: Key type dns_resolver registered Sep 8 23:56:41.875720 kernel: registered taskstats version 1 Sep 8 23:56:41.875727 kernel: Loading compiled-in X.509 certificates Sep 8 23:56:41.875734 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 98feb45e0c7a714eab78dfe8a165eb91758e42e9' Sep 8 23:56:41.875741 kernel: Key type .fscrypt registered Sep 8 23:56:41.875750 kernel: Key type fscrypt-provisioning registered Sep 8 23:56:41.875758 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:56:41.875765 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:56:41.875772 kernel: ima: No architecture policies found Sep 8 23:56:41.875779 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 8 23:56:41.875787 kernel: clk: Disabling unused clocks Sep 8 23:56:41.875794 kernel: Freeing unused kernel memory: 38400K Sep 8 23:56:41.875801 kernel: Run /init as init process Sep 8 23:56:41.875808 kernel: with arguments: Sep 8 23:56:41.875817 kernel: /init Sep 8 23:56:41.875824 kernel: with environment: Sep 8 23:56:41.875831 kernel: HOME=/ Sep 8 23:56:41.875838 kernel: TERM=linux Sep 8 23:56:41.875846 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:56:41.875854 systemd[1]: Successfully made /usr/ read-only. Sep 8 23:56:41.875865 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:56:41.875874 systemd[1]: Detected virtualization kvm. 
Sep 8 23:56:41.875882 systemd[1]: Detected architecture arm64. Sep 8 23:56:41.875889 systemd[1]: Running in initrd. Sep 8 23:56:41.875897 systemd[1]: No hostname configured, using default hostname. Sep 8 23:56:41.875905 systemd[1]: Hostname set to . Sep 8 23:56:41.875913 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:56:41.875921 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:56:41.875929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:56:41.875938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:56:41.875948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:56:41.875956 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:56:41.875964 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:56:41.875973 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 8 23:56:41.875982 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:56:41.875991 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:56:41.876000 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:56:41.876008 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:56:41.876016 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:56:41.876028 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:56:41.876036 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:56:41.876044 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:56:41.876052 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:56:41.876060 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:56:41.876068 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:56:41.876077 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:56:41.876086 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:56:41.876113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:56:41.876122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:56:41.876135 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:56:41.876144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:56:41.876152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:56:41.876160 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:56:41.876171 systemd[1]: Starting systemd-fsck-usr.service... Sep 8 23:56:41.876179 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:56:41.876187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:56:41.876195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:41.876203 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:56:41.876211 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 8 23:56:41.876221 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:56:41.876254 systemd-journald[237]: Collecting audit messages is disabled. Sep 8 23:56:41.876275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:56:41.876285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:56:41.876294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:41.876301 kernel: Bridge firewalling registered Sep 8 23:56:41.876309 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:41.876318 systemd-journald[237]: Journal started Sep 8 23:56:41.876337 systemd-journald[237]: Runtime Journal (/run/log/journal/9582bf4286014e7db487614be2143cc3) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:56:41.858725 systemd-modules-load[238]: Inserted module 'overlay' Sep 8 23:56:41.878979 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:56:41.874957 systemd-modules-load[238]: Inserted module 'br_netfilter' Sep 8 23:56:41.880903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:56:41.882343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:56:41.886490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:56:41.888072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:56:41.890250 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:56:41.898972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:56:41.901568 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:41.902780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:56:41.904622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:56:41.917339 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:56:41.919514 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:56:41.929772 dracut-cmdline[277]: dracut-dracut-053 Sep 8 23:56:41.932609 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007 Sep 8 23:56:41.949547 systemd-resolved[280]: Positive Trust Anchors: Sep 8 23:56:41.949568 systemd-resolved[280]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:56:41.949605 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:56:41.954698 systemd-resolved[280]: Defaulting to hostname 'linux'. Sep 8 23:56:41.955763 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:56:41.957702 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:56:42.004121 kernel: SCSI subsystem initialized Sep 8 23:56:42.009112 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:56:42.017140 kernel: iscsi: registered transport (tcp) Sep 8 23:56:42.031245 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:56:42.031291 kernel: QLogic iSCSI HBA Driver Sep 8 23:56:42.077152 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:56:42.088258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:56:42.105926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:56:42.105991 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:56:42.106014 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:56:42.152142 kernel: raid6: neonx8 gen() 15562 MB/s Sep 8 23:56:42.169134 kernel: raid6: neonx4 gen() 15309 MB/s Sep 8 23:56:42.186131 kernel: raid6: neonx2 gen() 12972 MB/s Sep 8 23:56:42.203130 kernel: raid6: neonx1 gen() 10397 MB/s Sep 8 23:56:42.220134 kernel: raid6: int64x8 gen() 6687 MB/s Sep 8 23:56:42.237121 kernel: raid6: int64x4 gen() 7259 MB/s Sep 8 23:56:42.254136 kernel: raid6: int64x2 gen() 6016 MB/s Sep 8 23:56:42.271185 kernel: raid6: int64x1 gen() 4876 MB/s Sep 8 23:56:42.271236 kernel: raid6: using algorithm neonx8 gen() 15562 MB/s Sep 8 23:56:42.289152 kernel: raid6: .... xor() 11768 MB/s, rmw enabled Sep 8 23:56:42.289231 kernel: raid6: using neon recovery algorithm Sep 8 23:56:42.295316 kernel: xor: measuring software checksum speed Sep 8 23:56:42.295383 kernel: 8regs : 21636 MB/sec Sep 8 23:56:42.296385 kernel: 32regs : 21681 MB/sec Sep 8 23:56:42.296408 kernel: arm64_neon : 25528 MB/sec Sep 8 23:56:42.296428 kernel: xor: using function: arm64_neon (25528 MB/sec) Sep 8 23:56:42.345128 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:56:42.359506 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:56:42.374426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:56:42.389700 systemd-udevd[463]: Using default interface naming scheme 'v255'. Sep 8 23:56:42.393827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:56:42.407378 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:56:42.422517 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Sep 8 23:56:42.463038 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 8 23:56:42.474293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:56:42.533445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:56:42.550704 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:56:42.564085 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:56:42.567202 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:56:42.570072 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:56:42.571388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:56:42.581341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:56:42.593689 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:56:42.609126 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 8 23:56:42.612310 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:56:42.617492 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:56:42.622151 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:56:42.622233 kernel: GPT:9289727 != 19775487 Sep 8 23:56:42.617619 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:42.621948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:42.629423 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:56:42.629449 kernel: GPT:9289727 != 19775487 Sep 8 23:56:42.629461 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:56:42.629471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:42.622981 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:56:42.623157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:42.629384 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:42.635513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:56:42.652126 kernel: BTRFS: device fsid 75950a77-34ea-4c25-8b07-0ac9de89ed80 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (529) Sep 8 23:56:42.651537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:42.657219 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (514) Sep 8 23:56:42.665856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:56:42.678424 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:56:42.685124 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:56:42.686403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:56:42.695516 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:56:42.708300 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:56:42.710361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:56:42.715902 disk-uuid[556]: Primary Header is updated. 
Sep 8 23:56:42.715902 disk-uuid[556]: Secondary Entries is updated. Sep 8 23:56:42.715902 disk-uuid[556]: Secondary Header is updated. Sep 8 23:56:42.723135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:42.735252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:43.734136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:56:43.734321 disk-uuid[557]: The operation has completed successfully. Sep 8 23:56:43.761491 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:56:43.762618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:56:43.804694 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:56:43.808046 sh[576]: Success Sep 8 23:56:43.818467 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 8 23:56:43.848866 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:56:43.871688 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:56:43.873958 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 8 23:56:43.884663 kernel: BTRFS info (device dm-0): first mount of filesystem 75950a77-34ea-4c25-8b07-0ac9de89ed80 Sep 8 23:56:43.884712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:56:43.884723 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:56:43.886142 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:56:43.886160 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:56:43.890337 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:56:43.891509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:56:43.892334 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:56:43.894497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:56:43.909912 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:56:43.909971 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:56:43.909982 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:43.912387 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:43.916129 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:56:43.920927 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:56:43.928327 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 8 23:56:43.992941 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:56:44.000851 ignition[664]: Ignition 2.20.0 Sep 8 23:56:44.000865 ignition[664]: Stage: fetch-offline Sep 8 23:56:44.006300 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 8 23:56:44.000902 ignition[664]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:44.000911 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:44.001065 ignition[664]: parsed url from cmdline: "" Sep 8 23:56:44.001069 ignition[664]: no config URL provided Sep 8 23:56:44.001074 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:56:44.001081 ignition[664]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:56:44.001126 ignition[664]: op(1): [started] loading QEMU firmware config module Sep 8 23:56:44.001131 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:56:44.007837 ignition[664]: op(1): [finished] loading QEMU firmware config module Sep 8 23:56:44.033704 systemd-networkd[767]: lo: Link UP Sep 8 23:56:44.033718 systemd-networkd[767]: lo: Gained carrier Sep 8 23:56:44.034630 systemd-networkd[767]: Enumeration completed Sep 8 23:56:44.034905 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:56:44.037613 ignition[664]: parsing config with SHA512: 004f148ede090dcd7093127eb6471e184629004dc7841318ad2f5a55403174ccee35ecc02de0ad715f49a628d11d5c98d55a51a0f752a00b2acce000fa6ac8fb Sep 8 23:56:44.035109 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:44.035113 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:56:44.035859 systemd-networkd[767]: eth0: Link UP Sep 8 23:56:44.035862 systemd-networkd[767]: eth0: Gained carrier Sep 8 23:56:44.035869 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:44.043033 ignition[664]: fetch-offline: fetch-offline passed Sep 8 23:56:44.037373 systemd[1]: Reached target network.target - Network. Sep 8 23:56:44.043147 ignition[664]: Ignition finished successfully Sep 8 23:56:44.042597 unknown[664]: fetched base config from "system" Sep 8 23:56:44.042604 unknown[664]: fetched user config from "qemu" Sep 8 23:56:44.044775 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:56:44.048004 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:56:44.058139 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:56:44.058286 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:56:44.073804 ignition[772]: Ignition 2.20.0 Sep 8 23:56:44.073815 ignition[772]: Stage: kargs Sep 8 23:56:44.073976 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:44.073984 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:44.075197 ignition[772]: kargs: kargs passed Sep 8 23:56:44.075311 ignition[772]: Ignition finished successfully Sep 8 23:56:44.078581 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:56:44.093308 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 8 23:56:44.103038 ignition[781]: Ignition 2.20.0 Sep 8 23:56:44.103047 ignition[781]: Stage: disks Sep 8 23:56:44.103247 ignition[781]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:44.103257 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:44.104273 ignition[781]: disks: disks passed Sep 8 23:56:44.105758 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:56:44.104319 ignition[781]: Ignition finished successfully Sep 8 23:56:44.106871 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:56:44.107965 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:56:44.109611 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:56:44.111044 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:56:44.112829 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:56:44.127259 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:56:44.137315 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:56:44.141698 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:56:44.156246 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:56:44.198127 kernel: EXT4-fs (vda9): mounted filesystem 3b93848a-00fd-42cd-b996-7bf357d8ae77 r/w with ordered data mode. Quota mode: none. Sep 8 23:56:44.198819 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:56:44.199945 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:56:44.210216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:56:44.212297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:56:44.213075 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:56:44.213137 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:56:44.213163 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:56:44.218546 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:56:44.220295 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:56:44.226183 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (800) Sep 8 23:56:44.226224 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:56:44.226235 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:56:44.227318 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:44.230144 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:44.231348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:56:44.257732 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:56:44.262037 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:56:44.266110 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:56:44.270163 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:56:44.340318 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 8 23:56:44.349236 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:56:44.351673 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:56:44.356110 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:56:44.371773 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:56:44.375293 ignition[913]: INFO : Ignition 2.20.0 Sep 8 23:56:44.375293 ignition[913]: INFO : Stage: mount Sep 8 23:56:44.377384 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:44.377384 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:44.377384 ignition[913]: INFO : mount: mount passed Sep 8 23:56:44.377384 ignition[913]: INFO : Ignition finished successfully Sep 8 23:56:44.378403 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:56:44.389229 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:56:44.883996 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:56:44.896268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:56:44.903000 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (926) Sep 8 23:56:44.903038 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:56:44.903048 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:56:44.903809 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:56:44.906109 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:56:44.907325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:56:44.923722 ignition[943]: INFO : Ignition 2.20.0 Sep 8 23:56:44.923722 ignition[943]: INFO : Stage: files Sep 8 23:56:44.925026 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:44.925026 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:44.925026 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:56:44.927760 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:56:44.927760 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:56:44.930635 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:56:44.931725 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:56:44.931725 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:56:44.931145 unknown[943]: wrote ssh authorized keys file for user: core Sep 8 23:56:44.934922 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 8 23:56:44.934922 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 8 23:56:45.418240 systemd-networkd[767]: eth0: Gained IPv6LL Sep 8 23:56:45.856797 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:56:46.829111 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/home/core/install.sh" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 8 23:56:46.831232 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 8 23:56:47.207725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 8 23:56:47.602462 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 8 23:56:47.602462 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:56:47.605901 ignition[943]: 
INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 8 23:56:47.605901 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:56:47.621326 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:56:47.625170 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:56:47.629822 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:56:47.629822 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:56:47.629822 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:56:47.629822 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:56:47.629822 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:56:47.629822 ignition[943]: INFO : files: files passed Sep 8 23:56:47.629822 ignition[943]: INFO : Ignition finished successfully Sep 8 23:56:47.628068 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:56:47.640315 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:56:47.643305 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:56:47.644626 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:56:47.646101 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:56:47.650629 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:56:47.654036 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:56:47.654036 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:56:47.656999 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:56:47.658947 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:56:47.660389 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:56:47.672302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:56:47.691847 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:56:47.691953 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:56:47.693880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:56:47.695268 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:56:47.696755 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:56:47.697729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:56:47.712362 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:56:47.728308 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:56:47.736273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Sep 8 23:56:47.737231 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:56:47.738889 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:56:47.740293 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:56:47.740424 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:56:47.742411 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:56:47.743955 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:56:47.745318 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:56:47.746685 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:56:47.748167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:56:47.749820 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:56:47.751225 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:56:47.752748 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:56:47.754292 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:56:47.755618 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:56:47.756738 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:56:47.756865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:56:47.758739 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:56:47.760222 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:56:47.761753 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:56:47.761836 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:56:47.763370 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:56:47.763485 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:56:47.765749 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:56:47.765861 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:56:47.767293 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:56:47.768462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:56:47.773125 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:56:47.775108 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:56:47.775825 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:56:47.777023 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:56:47.777121 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:56:47.778382 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:56:47.778457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:56:47.779666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:56:47.779773 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:56:47.781056 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:56:47.781168 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:56:47.791263 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 8 23:56:47.792670 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:56:47.793344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:56:47.793455 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:56:47.795007 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:56:47.795114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:56:47.801199 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:56:47.803544 ignition[999]: INFO : Ignition 2.20.0 Sep 8 23:56:47.803544 ignition[999]: INFO : Stage: umount Sep 8 23:56:47.803544 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:56:47.803544 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:56:47.801357 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:56:47.808471 ignition[999]: INFO : umount: umount passed Sep 8 23:56:47.808471 ignition[999]: INFO : Ignition finished successfully Sep 8 23:56:47.806389 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:56:47.806501 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:56:47.807991 systemd[1]: Stopped target network.target - Network. Sep 8 23:56:47.809135 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:56:47.809210 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:56:47.810743 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:56:47.810792 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:56:47.812133 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:56:47.812187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:56:47.813648 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:56:47.813693 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:56:47.815242 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:56:47.816625 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:56:47.818830 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:56:47.823654 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:56:47.823769 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:56:47.827996 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:56:47.828264 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:56:47.828358 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:56:47.830964 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:56:47.831565 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:56:47.831636 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:56:47.840205 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:56:47.840914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:56:47.840976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:56:47.842588 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:56:47.842638 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Sep 8 23:56:47.844968 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:56:47.845011 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:56:47.846541 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:56:47.846578 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:56:47.848880 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:56:47.852850 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:56:47.852906 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:56:47.859721 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:56:47.859851 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:56:47.866806 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:56:47.866950 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:56:47.869587 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:56:47.869662 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:56:47.870734 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:56:47.870767 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:56:47.874001 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:56:47.874055 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:56:47.876440 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:56:47.876495 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:56:47.878030 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:56:47.878082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:56:47.891272 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:56:47.892118 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:56:47.892178 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:56:47.894870 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 8 23:56:47.894918 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:56:47.896750 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:56:47.896792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:56:47.898365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:56:47.898411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:47.901582 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:56:47.901656 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:56:47.901969 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:56:47.902075 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:56:47.903400 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:56:47.903491 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 8 23:56:47.906708 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:56:47.907802 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:56:47.907867 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:56:47.915242 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:56:47.920973 systemd[1]: Switching root. Sep 8 23:56:47.958136 systemd-journald[237]: Journal stopped Sep 8 23:56:48.689400 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 8 23:56:48.689455 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:56:48.689471 kernel: SELinux: policy capability open_perms=1 Sep 8 23:56:48.689481 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:56:48.689490 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:56:48.689500 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:56:48.689509 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:56:48.689536 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:56:48.689550 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:56:48.689563 kernel: audit: type=1403 audit(1757375808.099:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:56:48.689574 systemd[1]: Successfully loaded SELinux policy in 31.198ms. Sep 8 23:56:48.689600 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.807ms. Sep 8 23:56:48.689612 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:56:48.689623 systemd[1]: Detected virtualization kvm. Sep 8 23:56:48.689634 systemd[1]: Detected architecture arm64. Sep 8 23:56:48.689665 systemd[1]: Detected first boot. Sep 8 23:56:48.689677 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:56:48.689687 zram_generator::config[1045]: No configuration found. Sep 8 23:56:48.689698 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:56:48.689708 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:56:48.689718 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:56:48.689729 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:56:48.689748 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:56:48.689758 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:56:48.689770 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:56:48.689780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:56:48.689790 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:56:48.689800 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:56:48.689812 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:56:48.689822 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:56:48.689832 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:56:48.689842 systemd[1]: Created slice user.slice - User and Session Slice. 
Sep 8 23:56:48.689854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:56:48.689865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:56:48.689875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:56:48.689885 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:56:48.689896 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:56:48.689906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:56:48.689916 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 8 23:56:48.689926 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:56:48.689937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:56:48.689949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:56:48.689960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:56:48.689970 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:56:48.689980 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:56:48.689990 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:56:48.690000 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:56:48.690010 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:56:48.690020 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:56:48.690032 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:56:48.690042 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:56:48.690052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:56:48.690063 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:56:48.690073 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:56:48.690083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:56:48.691139 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:56:48.691161 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:56:48.691173 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:56:48.691190 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:56:48.691201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:56:48.691211 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:56:48.691223 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:56:48.691233 systemd[1]: Reached target machines.target - Containers. Sep 8 23:56:48.691243 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:56:48.691254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 8 23:56:48.691264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:56:48.691275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:56:48.691286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:56:48.691297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:56:48.691307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:56:48.691320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:56:48.691330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:56:48.691341 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:56:48.691351 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:56:48.691361 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:56:48.691373 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:56:48.691385 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:56:48.691395 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:56:48.691406 kernel: loop: module loaded Sep 8 23:56:48.691416 kernel: ACPI: bus type drm_connector registered Sep 8 23:56:48.691426 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:56:48.691436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:56:48.691446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:56:48.691456 kernel: fuse: init (API version 7.39) Sep 8 23:56:48.691467 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:56:48.691478 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:56:48.691516 systemd-journald[1113]: Collecting audit messages is disabled. Sep 8 23:56:48.691540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:56:48.691550 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:56:48.691560 systemd[1]: Stopped verity-setup.service. Sep 8 23:56:48.691573 systemd-journald[1113]: Journal started Sep 8 23:56:48.691603 systemd-journald[1113]: Runtime Journal (/run/log/journal/9582bf4286014e7db487614be2143cc3) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:56:48.483403 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:56:48.498284 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:56:48.498719 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:56:48.697012 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:56:48.698248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:56:48.699279 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:56:48.700426 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:56:48.701362 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:56:48.702419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 8 23:56:48.703524 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:56:48.704734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:56:48.707449 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:56:48.707626 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:56:48.709011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:56:48.709216 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:56:48.710356 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:56:48.710503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:56:48.711775 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:56:48.711919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:56:48.714644 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:56:48.715829 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:56:48.715975 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:56:48.717179 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:56:48.717325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:56:48.718489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:56:48.720660 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:56:48.722250 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:56:48.723757 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:56:48.737165 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:56:48.749249 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:56:48.751455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:56:48.752500 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:56:48.752551 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:56:48.754402 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:56:48.756670 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:56:48.758914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:56:48.760026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:56:48.761531 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:56:48.763740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:56:48.764866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:56:48.767289 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:56:48.768547 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 8 23:56:48.771269 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:56:48.776261 systemd-journald[1113]: Time spent on flushing to /var/log/journal/9582bf4286014e7db487614be2143cc3 is 16.590ms for 867 entries. Sep 8 23:56:48.776261 systemd-journald[1113]: System Journal (/var/log/journal/9582bf4286014e7db487614be2143cc3) is 8M, max 195.6M, 187.6M free. Sep 8 23:56:48.810509 systemd-journald[1113]: Received client request to flush runtime journal. Sep 8 23:56:48.810554 kernel: loop0: detected capacity change from 0 to 113512 Sep 8 23:56:48.810567 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:56:48.777360 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:56:48.780343 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:56:48.786134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:56:48.787558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:56:48.789654 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:56:48.791746 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:56:48.794401 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:56:48.800690 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:56:48.811375 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:56:48.816180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:56:48.817549 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:56:48.819976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:56:48.826342 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Sep 8 23:56:48.826359 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Sep 8 23:56:48.831103 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:56:48.832633 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:56:48.840403 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:56:48.842167 kernel: loop1: detected capacity change from 0 to 203944 Sep 8 23:56:48.843916 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:56:48.862340 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:56:48.868338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:56:48.872115 kernel: loop2: detected capacity change from 0 to 123192 Sep 8 23:56:48.881885 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Sep 8 23:56:48.881905 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Sep 8 23:56:48.886479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 8 23:56:48.936139 kernel: loop3: detected capacity change from 0 to 113512 Sep 8 23:56:48.941281 kernel: loop4: detected capacity change from 0 to 203944 Sep 8 23:56:48.948110 kernel: loop5: detected capacity change from 0 to 123192 Sep 8 23:56:48.951473 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:56:48.951906 (sd-merge)[1191]: Merged extensions into '/usr'. Sep 8 23:56:48.958583 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:56:48.958609 systemd[1]: Reloading... Sep 8 23:56:49.018546 zram_generator::config[1216]: No configuration found. Sep 8 23:56:49.074856 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:56:49.119848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:56:49.169368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:56:49.169749 systemd[1]: Reloading finished in 210 ms. Sep 8 23:56:49.187817 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:56:49.189117 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:56:49.204367 systemd[1]: Starting ensure-sysext.service... Sep 8 23:56:49.206054 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:56:49.216258 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:56:49.216285 systemd[1]: Reloading... Sep 8 23:56:49.230204 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:56:49.230398 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:56:49.231035 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:56:49.231244 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Sep 8 23:56:49.231290 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Sep 8 23:56:49.243604 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:56:49.243758 systemd-tmpfiles[1255]: Skipping /boot Sep 8 23:56:49.253882 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:56:49.254004 systemd-tmpfiles[1255]: Skipping /boot Sep 8 23:56:49.278121 zram_generator::config[1285]: No configuration found. Sep 8 23:56:49.358968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:56:49.408332 systemd[1]: Reloading finished in 191 ms. Sep 8 23:56:49.419886 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:56:49.443134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:56:49.450852 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:56:49.453444 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 8 23:56:49.456023 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:56:49.459657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:56:49.464627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:56:49.468458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:56:49.472390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:56:49.474409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:56:49.479324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:56:49.484402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:56:49.485819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:56:49.485947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:56:49.488237 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:56:49.489942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:56:49.492149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:56:49.494009 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:56:49.497852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:56:49.498005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:56:49.506956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:56:49.507376 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Sep 8 23:56:49.515375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:56:49.519346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:56:49.520441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:56:49.520561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:56:49.524439 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:56:49.526477 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:56:49.526847 augenrules[1355]: No rules Sep 8 23:56:49.528829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:56:49.530722 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:56:49.531388 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:56:49.535141 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:56:49.538744 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:56:49.538905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 8 23:56:49.540793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:56:49.540939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:56:49.542726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:56:49.543160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:56:49.545819 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:56:49.570252 systemd[1]: Finished ensure-sysext.service. Sep 8 23:56:49.572622 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:56:49.581003 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 8 23:56:49.584120 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1374) Sep 8 23:56:49.597327 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:56:49.598169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:56:49.600274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:56:49.603476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:56:49.607538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:56:49.611303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:56:49.612541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:56:49.612680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:56:49.616327 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:56:49.621398 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:56:49.624311 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:56:49.624900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:56:49.625064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:56:49.626219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:56:49.626385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:56:49.627579 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:56:49.628552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:56:49.638458 augenrules[1394]: /sbin/augenrules: No change Sep 8 23:56:49.639571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:56:49.642637 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:56:49.642844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:56:49.648150 augenrules[1425]: No rules Sep 8 23:56:49.650553 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:56:49.650796 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:56:49.652020 systemd-resolved[1324]: Positive Trust Anchors: Sep 8 23:56:49.652036 systemd-resolved[1324]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:56:49.652068 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:56:49.658997 systemd-resolved[1324]: Defaulting to hostname 'linux'. Sep 8 23:56:49.664579 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:56:49.665553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:56:49.665632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:56:49.665773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:56:49.671006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:56:49.681577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:56:49.694225 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:56:49.695418 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:56:49.722973 systemd-networkd[1409]: lo: Link UP Sep 8 23:56:49.722984 systemd-networkd[1409]: lo: Gained carrier Sep 8 23:56:49.723927 systemd-networkd[1409]: Enumeration completed Sep 8 23:56:49.724768 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:49.724775 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:56:49.725175 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:56:49.726173 systemd-networkd[1409]: eth0: Link UP Sep 8 23:56:49.726180 systemd-networkd[1409]: eth0: Gained carrier Sep 8 23:56:49.726194 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:56:49.732826 systemd[1]: Reached target network.target - Network. Sep 8 23:56:49.744172 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:56:49.744705 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Sep 8 23:56:49.745267 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:56:49.745444 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:56:49.745491 systemd-timesyncd[1410]: Initial clock synchronization to Mon 2025-09-08 23:56:49.906161 UTC. Sep 8 23:56:49.747247 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:56:49.749084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 8 23:56:49.750341 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:56:49.756330 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:56:49.764039 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:56:49.767985 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:56:49.788454 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:56:49.802606 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:56:49.803891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:56:49.804832 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:56:49.805761 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:56:49.806770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:56:49.807920 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:56:49.808905 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:56:49.809924 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:56:49.810925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:56:49.810958 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:56:49.811887 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:56:49.813507 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:56:49.815720 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:56:49.818679 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:56:49.819832 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:56:49.820848 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:56:49.823772 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:56:49.825012 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:56:49.827206 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:56:49.828560 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:56:49.829470 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:56:49.830204 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:56:49.830901 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:56:49.830935 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:56:49.831850 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:56:49.833696 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:56:49.835119 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:56:49.837249 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 8 23:56:49.840333 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:56:49.842357 jq[1456]: false Sep 8 23:56:49.843207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:56:49.845326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:56:49.849252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:56:49.852412 dbus-daemon[1455]: [system] SELinux support is enabled Sep 8 23:56:49.853710 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:56:49.856141 extend-filesystems[1457]: Found loop3 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found loop4 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found loop5 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda1 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda2 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda3 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found usr Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda4 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda6 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda7 Sep 8 23:56:49.857243 extend-filesystems[1457]: Found vda9 Sep 8 23:56:49.857243 extend-filesystems[1457]: Checking size of /dev/vda9 Sep 8 23:56:49.857745 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:56:49.863312 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:56:49.866181 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:56:49.866647 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:56:49.872786 extend-filesystems[1457]: Resized partition /dev/vda9 Sep 8 23:56:49.872310 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:56:49.875454 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:56:49.876480 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:56:49.881162 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:56:49.884416 jq[1477]: true Sep 8 23:56:49.885114 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Sep 8 23:56:49.885713 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:56:49.889916 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:56:49.892253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:56:49.892600 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:56:49.892779 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:56:49.895555 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:56:49.895961 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 8 23:56:49.899277 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:56:49.910517 update_engine[1472]: I20250908 23:56:49.908859 1472 main.cc:92] Flatcar Update Engine starting Sep 8 23:56:49.912268 update_engine[1472]: I20250908 23:56:49.911670 1472 update_check_scheduler.cc:74] Next update check in 2m34s Sep 8 23:56:49.911118 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:56:49.916961 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:56:49.917437 jq[1481]: true Sep 8 23:56:49.924922 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:56:49.924958 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:56:49.926158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:56:49.926188 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:56:49.935447 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:56:49.944530 tar[1480]: linux-arm64/helm Sep 8 23:56:49.964390 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:56:49.965056 systemd-logind[1470]: Watching system buttons on /dev/input/event0 (Power Button) Sep 8 23:56:49.965650 systemd-logind[1470]: New seat seat0. Sep 8 23:56:49.973720 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:56:49.970955 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:56:49.982717 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:56:49.982717 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:56:49.982717 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:56:49.988131 extend-filesystems[1457]: Resized filesystem in /dev/vda9 Sep 8 23:56:49.990549 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:56:49.984069 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:56:49.984269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:56:49.991163 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:56:49.993274 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:56:50.088608 containerd[1485]: time="2025-09-08T23:56:50.088515811Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:56:50.120617 containerd[1485]: time="2025-09-08T23:56:50.120525415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122140 containerd[1485]: time="2025-09-08T23:56:50.122095837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122172 containerd[1485]: time="2025-09-08T23:56:50.122140803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:56:50.122190 containerd[1485]: time="2025-09-08T23:56:50.122167897Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:56:50.122351 containerd[1485]: time="2025-09-08T23:56:50.122330990Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:56:50.122378 containerd[1485]: time="2025-09-08T23:56:50.122354616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122448 containerd[1485]: time="2025-09-08T23:56:50.122429042Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122473 containerd[1485]: time="2025-09-08T23:56:50.122446221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122671 containerd[1485]: time="2025-09-08T23:56:50.122648812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122703 containerd[1485]: time="2025-09-08T23:56:50.122670235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122703 containerd[1485]: time="2025-09-08T23:56:50.122684394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122703 containerd[1485]: time="2025-09-08T23:56:50.122694064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122790 containerd[1485]: time="2025-09-08T23:56:50.122772408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.122982 containerd[1485]: time="2025-09-08T23:56:50.122963289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:56:50.123149 containerd[1485]: time="2025-09-08T23:56:50.123128749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:56:50.123176 containerd[1485]: time="2025-09-08T23:56:50.123149681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:56:50.123251 containerd[1485]: time="2025-09-08T23:56:50.123233329Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 8 23:56:50.124144 containerd[1485]: time="2025-09-08T23:56:50.123282416Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:56:50.126317 containerd[1485]: time="2025-09-08T23:56:50.126289627Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:56:50.126389 containerd[1485]: time="2025-09-08T23:56:50.126343162Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:56:50.126389 containerd[1485]: time="2025-09-08T23:56:50.126360218Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:56:50.126389 containerd[1485]: time="2025-09-08T23:56:50.126377233Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:56:50.126449 containerd[1485]: time="2025-09-08T23:56:50.126391637Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:56:50.126552 containerd[1485]: time="2025-09-08T23:56:50.126528943Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:56:50.126786 containerd[1485]: time="2025-09-08T23:56:50.126768625Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:56:50.126884 containerd[1485]: time="2025-09-08T23:56:50.126868840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:56:50.126908 containerd[1485]: time="2025-09-08T23:56:50.126889078Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:56:50.126908 containerd[1485]: time="2025-09-08T23:56:50.126904502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:56:50.126950 containerd[1485]: time="2025-09-08T23:56:50.126918620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.126950 containerd[1485]: time="2025-09-08T23:56:50.126932208Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.126950 containerd[1485]: time="2025-09-08T23:56:50.126945633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.127003 containerd[1485]: time="2025-09-08T23:56:50.126959832Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.127003 containerd[1485]: time="2025-09-08T23:56:50.126974889Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.127003 containerd[1485]: time="2025-09-08T23:56:50.126989946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.127056 containerd[1485]: time="2025-09-08T23:56:50.127011613Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:56:50.127056 containerd[1485]: time="2025-09-08T23:56:50.127024915Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 8 23:56:50.127056 containerd[1485]: time="2025-09-08T23:56:50.127045276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127148 containerd[1485]: time="2025-09-08T23:56:50.127059068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127148 containerd[1485]: time="2025-09-08T23:56:50.127072125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127148 containerd[1485]: time="2025-09-08T23:56:50.127094037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127148 containerd[1485]: time="2025-09-08T23:56:50.127131372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127156344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127169524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127181928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127195720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127211511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127228 containerd[1485]: time="2025-09-08T23:56:50.127223059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127235708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127248439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127263985Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127285285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127298913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127325 containerd[1485]: time="2025-09-08T23:56:50.127309849Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:56:50.127548 containerd[1485]: time="2025-09-08T23:56:50.127500199Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:56:50.127548 containerd[1485]: time="2025-09-08T23:56:50.127521376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:56:50.127548 containerd[1485]: time="2025-09-08T23:56:50.127532190Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:56:50.127548 containerd[1485]: time="2025-09-08T23:56:50.127545818Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:56:50.127770 containerd[1485]: time="2025-09-08T23:56:50.127555733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127770 containerd[1485]: time="2025-09-08T23:56:50.127568505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:56:50.127770 containerd[1485]: time="2025-09-08T23:56:50.127578543Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:56:50.127770 containerd[1485]: time="2025-09-08T23:56:50.127590172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:56:50.127973 containerd[1485]: time="2025-09-08T23:56:50.127929498Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:56:50.128083 containerd[1485]: time="2025-09-08T23:56:50.127978585Z" level=info msg="Connect containerd service" Sep 8 23:56:50.128083 containerd[1485]: time="2025-09-08T23:56:50.128007433Z" level=info msg="using legacy CRI server" Sep 8 23:56:50.128083 containerd[1485]: time="2025-09-08T23:56:50.128014043Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:56:50.128361 containerd[1485]: time="2025-09-08T23:56:50.128343291Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:56:50.129034 containerd[1485]: time="2025-09-08T23:56:50.129006722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:56:50.129426 containerd[1485]: time="2025-09-08T23:56:50.129307815Z" level=info msg="Start subscribing containerd event" Sep 8 23:56:50.129587 containerd[1485]: time="2025-09-08T23:56:50.129568430Z" level=info msg="Start recovering state" Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129740949Z" level=info msg="Start event monitor" Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129764574Z" level=info msg="Start snapshots syncer" Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129775020Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129784282Z" level=info msg="Start streaming server" Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129588913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:56:50.129916 containerd[1485]: time="2025-09-08T23:56:50.129894535Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:56:50.132631 containerd[1485]: time="2025-09-08T23:56:50.130565637Z" level=info msg="containerd successfully booted in 0.044884s" Sep 8 23:56:50.130693 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:56:50.302183 tar[1480]: linux-arm64/LICENSE Sep 8 23:56:50.302380 tar[1480]: linux-arm64/README.md Sep 8 23:56:50.316152 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:56:51.457628 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:56:51.476454 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:56:51.491390 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:56:51.496714 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:56:51.496982 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:56:51.498227 systemd-networkd[1409]: eth0: Gained IPv6LL Sep 8 23:56:51.500267 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:56:51.501528 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:56:51.503376 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:56:51.505671 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
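[Editorial note] The entries above show containerd's CRI plugin loading its config and the daemon serving its API on /run/containerd/containerd.sock (plus the ttrpc socket). As an illustration only, a minimal Go sketch that talks to that socket with the containerd client library; the socket path and the CRI's "k8s.io" namespace come from containerd conventions and this log, everything else here is an assumption, not part of the boot.

```go
// Minimal sketch: query the containerd daemon the log shows serving on
// /run/containerd/containerd.sock. Assumes the containerd Go client
// (github.com/containerd/containerd) is available and the program runs as root.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps its images and containers in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ver, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
}
```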
Sep 8 23:56:51.507886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:56:51.510824 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:56:51.521384 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:56:51.530539 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:56:51.533686 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 8 23:56:51.535626 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:56:51.538136 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:56:51.541980 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:56:51.542441 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:56:51.546247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:56:52.096695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:56:52.098249 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:56:52.100007 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:56:52.100127 systemd[1]: Startup finished in 544ms (kernel) + 6.424s (initrd) + 4.032s (userspace) = 11.001s. Sep 8 23:56:52.488229 kubelet[1568]: E0908 23:56:52.488083 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:56:52.490472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:56:52.490629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:56:52.492181 systemd[1]: kubelet.service: Consumed 777ms CPU time, 259.7M memory peak. Sep 8 23:56:54.165785 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:56:54.167015 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:46622.service - OpenSSH per-connection server daemon (10.0.0.1:46622). Sep 8 23:56:54.225772 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 46622 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:56:54.227543 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:56:54.237570 systemd-logind[1470]: New session 1 of user core. Sep 8 23:56:54.238574 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:56:54.248351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:56:54.257268 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:56:54.259242 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:56:54.266178 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:56:54.268188 systemd-logind[1470]: New session c1 of user core. Sep 8 23:56:54.373874 systemd[1585]: Queued start job for default target default.target. Sep 8 23:56:54.381069 systemd[1585]: Created slice app.slice - User Application Slice. Sep 8 23:56:54.381097 systemd[1585]: Reached target paths.target - Paths. 
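[Editorial note] The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written later by kubeadm init/join, so this failure on first boot is expected rather than fatal. Purely as an illustration of the kind of document that lands there (not the config this node actually receives), a hedged Go sketch that emits a minimal kubelet.config.k8s.io/v1beta1 file; the field values are assumptions for demonstration.

```go
// Illustrative sketch only: print a minimal KubeletConfiguration of the kind
// kubeadm later writes to /var/lib/kubelet/config.yaml. Values are assumed,
// not taken from this node.
package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

func main() {
	cfg := map[string]interface{}{
		"apiVersion":    "kubelet.config.k8s.io/v1beta1",
		"kind":          "KubeletConfiguration",
		"cgroupDriver":  "systemd",                    // consistent with SystemdCgroup:true in the containerd runc options above
		"staticPodPath": "/etc/kubernetes/manifests",  // where the kubelet later looks for static pod manifests
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```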
Sep 8 23:56:54.381151 systemd[1585]: Reached target timers.target - Timers. Sep 8 23:56:54.382337 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:56:54.391152 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:56:54.391211 systemd[1585]: Reached target sockets.target - Sockets. Sep 8 23:56:54.391246 systemd[1585]: Reached target basic.target - Basic System. Sep 8 23:56:54.391274 systemd[1585]: Reached target default.target - Main User Target. Sep 8 23:56:54.391299 systemd[1585]: Startup finished in 117ms. Sep 8 23:56:54.391446 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:56:54.392847 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:56:54.455336 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Sep 8 23:56:54.501748 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:56:54.503150 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:56:54.510325 systemd-logind[1470]: New session 2 of user core. Sep 8 23:56:54.522287 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:56:54.573740 sshd[1598]: Connection closed by 10.0.0.1 port 46624 Sep 8 23:56:54.574092 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Sep 8 23:56:54.587720 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:46624.service: Deactivated successfully. Sep 8 23:56:54.590480 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:56:54.591663 systemd-logind[1470]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:56:54.592843 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:46636.service - OpenSSH per-connection server daemon (10.0.0.1:46636). Sep 8 23:56:54.593780 systemd-logind[1470]: Removed session 2. Sep 8 23:56:54.635472 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 46636 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:56:54.636804 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:56:54.640822 systemd-logind[1470]: New session 3 of user core. Sep 8 23:56:54.649263 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:56:54.697592 sshd[1606]: Connection closed by 10.0.0.1 port 46636 Sep 8 23:56:54.698078 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Sep 8 23:56:54.709544 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:46636.service: Deactivated successfully. Sep 8 23:56:54.710879 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:56:54.711560 systemd-logind[1470]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:56:54.727447 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:46648.service - OpenSSH per-connection server daemon (10.0.0.1:46648). Sep 8 23:56:54.728386 systemd-logind[1470]: Removed session 3. Sep 8 23:56:54.765934 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 46648 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:56:54.767188 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:56:54.771443 systemd-logind[1470]: New session 4 of user core. Sep 8 23:56:54.782304 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 8 23:56:54.833651 sshd[1614]: Connection closed by 10.0.0.1 port 46648 Sep 8 23:56:54.834124 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Sep 8 23:56:54.851795 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:46648.service: Deactivated successfully. Sep 8 23:56:54.853327 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:56:54.854451 systemd-logind[1470]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:56:54.855566 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:46660.service - OpenSSH per-connection server daemon (10.0.0.1:46660). Sep 8 23:56:54.857274 systemd-logind[1470]: Removed session 4. Sep 8 23:56:54.897629 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 46660 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:56:54.898834 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:56:54.903380 systemd-logind[1470]: New session 5 of user core. Sep 8 23:56:54.926300 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:56:54.983409 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:56:54.984009 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:56:55.268357 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:56:55.268442 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:56:55.478358 dockerd[1643]: time="2025-09-08T23:56:55.478305344Z" level=info msg="Starting up" Sep 8 23:56:55.689234 dockerd[1643]: time="2025-09-08T23:56:55.688909942Z" level=info msg="Loading containers: start." Sep 8 23:56:55.830135 kernel: Initializing XFRM netlink socket Sep 8 23:56:55.899637 systemd-networkd[1409]: docker0: Link UP Sep 8 23:56:55.938475 dockerd[1643]: time="2025-09-08T23:56:55.938430631Z" level=info msg="Loading containers: done." Sep 8 23:56:55.952736 dockerd[1643]: time="2025-09-08T23:56:55.952635736Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:56:55.952861 dockerd[1643]: time="2025-09-08T23:56:55.952735030Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:56:55.953043 dockerd[1643]: time="2025-09-08T23:56:55.952905287Z" level=info msg="Daemon has completed initialization" Sep 8 23:56:55.979968 dockerd[1643]: time="2025-09-08T23:56:55.979866270Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:56:55.980028 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:56:56.478223 containerd[1485]: time="2025-09-08T23:56:56.478182472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 8 23:56:57.112252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13835276.mount: Deactivated successfully. 
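[Editorial note] dockerd has just reported "API listen on /run/docker.sock". A minimal, hedged Go sketch that pings that socket with the Docker SDK; the socket path is taken from the log, the rest (SDK usage, running as root or a docker-group member) is assumed.

```go
// Minimal sketch: ping the Docker daemon the log shows listening on
// /run/docker.sock, using the official Go SDK (github.com/docker/docker/client).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Printf("API version %s, OS type %s\n", ping.APIVersion, ping.OSType)
}
```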
Sep 8 23:56:57.947734 containerd[1485]: time="2025-09-08T23:56:57.947667171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:57.948825 containerd[1485]: time="2025-09-08T23:56:57.948779724Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 8 23:56:57.950389 containerd[1485]: time="2025-09-08T23:56:57.950020440Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:57.954540 containerd[1485]: time="2025-09-08T23:56:57.954508344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:57.956828 containerd[1485]: time="2025-09-08T23:56:57.956789528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.478563718s" Sep 8 23:56:57.956945 containerd[1485]: time="2025-09-08T23:56:57.956927247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 8 23:56:57.958263 containerd[1485]: time="2025-09-08T23:56:57.958235451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 8 23:56:59.112530 containerd[1485]: time="2025-09-08T23:56:59.112475966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:59.115746 containerd[1485]: time="2025-09-08T23:56:59.115675344Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 8 23:56:59.117125 containerd[1485]: time="2025-09-08T23:56:59.116498850Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:59.119457 containerd[1485]: time="2025-09-08T23:56:59.119421687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:56:59.120634 containerd[1485]: time="2025-09-08T23:56:59.120612037Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.16234399s" Sep 8 23:56:59.120679 containerd[1485]: time="2025-09-08T23:56:59.120640528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 8 23:56:59.121043 
containerd[1485]: time="2025-09-08T23:56:59.121023186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 8 23:57:00.199132 containerd[1485]: time="2025-09-08T23:57:00.199065876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:00.199982 containerd[1485]: time="2025-09-08T23:57:00.199663017Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 8 23:57:00.200703 containerd[1485]: time="2025-09-08T23:57:00.200667943Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:00.203881 containerd[1485]: time="2025-09-08T23:57:00.203848914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:00.205228 containerd[1485]: time="2025-09-08T23:57:00.205199256Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.084145414s" Sep 8 23:57:00.205273 containerd[1485]: time="2025-09-08T23:57:00.205232752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 8 23:57:00.205712 containerd[1485]: time="2025-09-08T23:57:00.205681030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 8 23:57:01.086366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190575682.mount: Deactivated successfully. 
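[Editorial note] The pulls logged above (kube-apiserver, kube-controller-manager, kube-scheduler) go through containerd's image service. For illustration, a hedged Go sketch that performs the same kind of pull with the containerd client in the CRI's "k8s.io" namespace; the image reference is copied from the log, and this is an equivalent call, not how the kubelet itself pulls (it uses the CRI API).

```go
// Sketch: pull one of the images named in the log through the containerd
// client, into the "k8s.io" namespace used by the CRI plugin.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-scheduler:v1.31.12", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Println("pulled", img.Name())
}
```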
Sep 8 23:57:01.302531 containerd[1485]: time="2025-09-08T23:57:01.302476379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:01.303468 containerd[1485]: time="2025-09-08T23:57:01.303422415Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 8 23:57:01.304331 containerd[1485]: time="2025-09-08T23:57:01.304301584Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:01.306494 containerd[1485]: time="2025-09-08T23:57:01.306461773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:01.307443 containerd[1485]: time="2025-09-08T23:57:01.307055025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.101343319s" Sep 8 23:57:01.307443 containerd[1485]: time="2025-09-08T23:57:01.307081225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 8 23:57:01.307764 containerd[1485]: time="2025-09-08T23:57:01.307732223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:57:01.783902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323824303.mount: Deactivated successfully. 
Sep 8 23:57:02.408938 containerd[1485]: time="2025-09-08T23:57:02.408888737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:02.409887 containerd[1485]: time="2025-09-08T23:57:02.409590566Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 8 23:57:02.412212 containerd[1485]: time="2025-09-08T23:57:02.410622286Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:02.413942 containerd[1485]: time="2025-09-08T23:57:02.413899580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:02.415249 containerd[1485]: time="2025-09-08T23:57:02.415213558Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.107451724s" Sep 8 23:57:02.415249 containerd[1485]: time="2025-09-08T23:57:02.415247013Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 8 23:57:02.415694 containerd[1485]: time="2025-09-08T23:57:02.415667267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:57:02.741560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:57:02.749339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:02.850114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:02.854304 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:57:02.965194 kubelet[1973]: E0908 23:57:02.965143 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:57:02.968662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:57:02.968808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:57:02.969319 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.7M memory peak. Sep 8 23:57:03.100504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131667269.mount: Deactivated successfully. 
Sep 8 23:57:03.104137 containerd[1485]: time="2025-09-08T23:57:03.104079283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:03.104898 containerd[1485]: time="2025-09-08T23:57:03.104813233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 8 23:57:03.105968 containerd[1485]: time="2025-09-08T23:57:03.105567897Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:03.107943 containerd[1485]: time="2025-09-08T23:57:03.107911971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:03.108987 containerd[1485]: time="2025-09-08T23:57:03.108948429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 693.254419ms" Sep 8 23:57:03.108987 containerd[1485]: time="2025-09-08T23:57:03.108983272Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 8 23:57:03.109719 containerd[1485]: time="2025-09-08T23:57:03.109697513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 8 23:57:03.602669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099677917.mount: Deactivated successfully. Sep 8 23:57:05.070988 containerd[1485]: time="2025-09-08T23:57:05.070933177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:05.072245 containerd[1485]: time="2025-09-08T23:57:05.072200084Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 8 23:57:05.073123 containerd[1485]: time="2025-09-08T23:57:05.072844747Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:05.077131 containerd[1485]: time="2025-09-08T23:57:05.077070738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:05.078336 containerd[1485]: time="2025-09-08T23:57:05.077922882Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.96819551s" Sep 8 23:57:05.078336 containerd[1485]: time="2025-09-08T23:57:05.077954889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 8 23:57:09.023876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
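[Editorial note] By this point the control-plane images (apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd) have been pulled. A hedged sketch of how to list them over the same CRI endpoint the kubelet uses, i.e. the gRPC equivalent of `crictl images`; the socket path comes from the log, the libraries and the rest are assumptions.

```go
// Sketch: list images via containerd's CRI ImageService, the same gRPC API the
// kubelet and crictl use. Assumes k8s.io/cri-api and google.golang.org/grpc.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.ListImages(context.Background(), &runtimeapi.ListImagesRequest{})
	if err != nil {
		log.Fatalf("list: %v", err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.Id, img.RepoTags)
	}
}
```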
Sep 8 23:57:09.024020 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.7M memory peak. Sep 8 23:57:09.036296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:09.058312 systemd[1]: Reload requested from client PID 2069 ('systemctl') (unit session-5.scope)... Sep 8 23:57:09.058327 systemd[1]: Reloading... Sep 8 23:57:09.115181 zram_generator::config[2113]: No configuration found. Sep 8 23:57:09.225981 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:09.297577 systemd[1]: Reloading finished in 238 ms. Sep 8 23:57:09.335400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:09.336794 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:09.338859 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:57:09.339048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:09.339083 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak. Sep 8 23:57:09.341373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:09.435740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:09.439852 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:57:09.471993 kubelet[2160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:57:09.471993 kubelet[2160]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 8 23:57:09.471993 kubelet[2160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:57:09.472353 kubelet[2160]: I0908 23:57:09.472043 2160 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:57:10.060438 kubelet[2160]: I0908 23:57:10.060383 2160 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 8 23:57:10.060438 kubelet[2160]: I0908 23:57:10.060427 2160 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:57:10.060721 kubelet[2160]: I0908 23:57:10.060690 2160 server.go:934] "Client rotation is on, will bootstrap in background" Sep 8 23:57:10.080756 kubelet[2160]: E0908 23:57:10.080710 2160 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:10.082942 kubelet[2160]: I0908 23:57:10.082913 2160 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:57:10.170651 kubelet[2160]: E0908 23:57:10.170515 2160 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:57:10.170651 kubelet[2160]: I0908 23:57:10.170637 2160 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:57:10.174784 kubelet[2160]: I0908 23:57:10.174139 2160 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:57:10.175144 kubelet[2160]: I0908 23:57:10.175105 2160 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 8 23:57:10.175292 kubelet[2160]: I0908 23:57:10.175250 2160 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:57:10.175456 kubelet[2160]: I0908 23:57:10.175283 2160 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:57:10.175456 kubelet[2160]: I0908 23:57:10.175453 2160 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:57:10.175562 kubelet[2160]: I0908 23:57:10.175462 2160 container_manager_linux.go:300] "Creating device plugin manager" Sep 8 23:57:10.175732 kubelet[2160]: I0908 23:57:10.175695 2160 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:10.177779 kubelet[2160]: I0908 23:57:10.177722 2160 kubelet.go:408] "Attempting to sync node with API server" Sep 8 23:57:10.177779 kubelet[2160]: I0908 23:57:10.177751 2160 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:57:10.177779 kubelet[2160]: I0908 23:57:10.177777 2160 kubelet.go:314] "Adding apiserver pod source" Sep 8 23:57:10.177921 kubelet[2160]: I0908 23:57:10.177791 2160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:57:10.185572 kubelet[2160]: W0908 23:57:10.184464 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:10.185572 kubelet[2160]: E0908 23:57:10.184526 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:10.188213 kubelet[2160]: I0908 23:57:10.185836 2160 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:57:10.191793 kubelet[2160]: W0908 23:57:10.190592 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:10.191793 kubelet[2160]: E0908 23:57:10.190671 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:10.191793 kubelet[2160]: I0908 23:57:10.190700 2160 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:57:10.191793 kubelet[2160]: W0908 23:57:10.190982 2160 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:57:10.192232 kubelet[2160]: I0908 23:57:10.192178 2160 server.go:1274] "Started kubelet" Sep 8 23:57:10.197051 kubelet[2160]: I0908 23:57:10.192756 2160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:57:10.197051 kubelet[2160]: I0908 23:57:10.195221 2160 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:57:10.197051 kubelet[2160]: I0908 23:57:10.196367 2160 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:57:10.197051 kubelet[2160]: I0908 23:57:10.196544 2160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:57:10.198202 kubelet[2160]: E0908 23:57:10.196826 2160 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637401af75e02c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:57:10.19215262 +0000 UTC m=+0.748975617,LastTimestamp:2025-09-08 23:57:10.19215262 +0000 UTC m=+0.748975617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:57:10.198499 kubelet[2160]: I0908 23:57:10.198481 2160 server.go:449] "Adding debug handlers to kubelet server" Sep 8 23:57:10.199575 kubelet[2160]: I0908 23:57:10.199546 2160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:57:10.200545 kubelet[2160]: E0908 23:57:10.200516 2160 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:57:10.201210 kubelet[2160]: E0908 23:57:10.201071 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:10.201210 kubelet[2160]: I0908 23:57:10.201114 2160 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 8 23:57:10.201306 kubelet[2160]: I0908 23:57:10.201235 2160 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 8 23:57:10.201328 kubelet[2160]: I0908 23:57:10.201305 2160 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:57:10.201673 kubelet[2160]: I0908 23:57:10.201644 2160 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:57:10.201768 kubelet[2160]: I0908 23:57:10.201737 2160 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:57:10.201801 kubelet[2160]: W0908 23:57:10.201735 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:10.201944 kubelet[2160]: E0908 23:57:10.201905 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Sep 8 23:57:10.202008 kubelet[2160]: E0908 23:57:10.201947 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:10.202841 kubelet[2160]: I0908 23:57:10.202818 2160 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:57:10.213162 kubelet[2160]: I0908 23:57:10.213112 2160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:57:10.214697 kubelet[2160]: I0908 23:57:10.214650 2160 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 8 23:57:10.214697 kubelet[2160]: I0908 23:57:10.214682 2160 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 8 23:57:10.214697 kubelet[2160]: I0908 23:57:10.214700 2160 kubelet.go:2321] "Starting kubelet main sync loop" Sep 8 23:57:10.214818 kubelet[2160]: E0908 23:57:10.214741 2160 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:57:10.218976 kubelet[2160]: W0908 23:57:10.218917 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:10.218976 kubelet[2160]: E0908 23:57:10.218976 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:10.219873 kubelet[2160]: I0908 23:57:10.219850 2160 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 8 23:57:10.219873 kubelet[2160]: I0908 23:57:10.219866 2160 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 8 23:57:10.219944 kubelet[2160]: I0908 23:57:10.219885 2160 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:10.221810 kubelet[2160]: I0908 23:57:10.221780 2160 policy_none.go:49] "None policy: Start" Sep 8 23:57:10.222434 kubelet[2160]: I0908 23:57:10.222384 2160 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 8 23:57:10.222482 kubelet[2160]: I0908 23:57:10.222450 2160 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:57:10.230051 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:57:10.242583 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:57:10.245845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:57:10.260175 kubelet[2160]: I0908 23:57:10.259964 2160 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:57:10.260175 kubelet[2160]: I0908 23:57:10.260180 2160 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:57:10.260329 kubelet[2160]: I0908 23:57:10.260192 2160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:57:10.260524 kubelet[2160]: I0908 23:57:10.260492 2160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:57:10.262262 kubelet[2160]: E0908 23:57:10.262239 2160 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:57:10.322341 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 8 23:57:10.335496 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 8 23:57:10.354228 systemd[1]: Created slice kubepods-burstable-podafc63b673d4bb8b427c4013c9070c620.slice - libcontainer container kubepods-burstable-podafc63b673d4bb8b427c4013c9070c620.slice. Sep 8 23:57:10.362894 kubelet[2160]: I0908 23:57:10.362843 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:57:10.363430 kubelet[2160]: E0908 23:57:10.363395 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 8 23:57:10.401901 kubelet[2160]: I0908 23:57:10.401852 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:10.401901 kubelet[2160]: I0908 23:57:10.401895 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:10.402049 kubelet[2160]: I0908 23:57:10.401917 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:10.402049 kubelet[2160]: I0908 23:57:10.401937 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:10.402049 kubelet[2160]: I0908 23:57:10.401951 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:10.402049 kubelet[2160]: I0908 23:57:10.401970 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:10.402049 kubelet[2160]: I0908 23:57:10.401986 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:10.402181 kubelet[2160]: I0908 23:57:10.402002 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:10.402181 kubelet[2160]: I0908 23:57:10.402016 2160 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:10.402501 kubelet[2160]: E0908 23:57:10.402450 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Sep 8 23:57:10.564666 kubelet[2160]: I0908 23:57:10.564618 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:57:10.565068 kubelet[2160]: E0908 23:57:10.564928 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 8 23:57:10.634172 kubelet[2160]: E0908 23:57:10.634004 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:10.634811 containerd[1485]: time="2025-09-08T23:57:10.634716904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:10.652127 kubelet[2160]: E0908 23:57:10.652079 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:10.652577 containerd[1485]: time="2025-09-08T23:57:10.652543827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:10.657127 kubelet[2160]: E0908 23:57:10.657054 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:10.657496 containerd[1485]: time="2025-09-08T23:57:10.657464462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:afc63b673d4bb8b427c4013c9070c620,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:10.803847 kubelet[2160]: E0908 23:57:10.803794 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Sep 8 23:57:10.966398 kubelet[2160]: I0908 23:57:10.966369 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:57:10.966729 kubelet[2160]: E0908 23:57:10.966691 2160 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 8 23:57:11.069524 kubelet[2160]: W0908 23:57:11.069453 2160 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:11.069524 kubelet[2160]: E0908 23:57:11.069528 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:11.093455 kubelet[2160]: W0908 23:57:11.093385 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:11.093537 kubelet[2160]: E0908 23:57:11.093451 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:11.113466 kubelet[2160]: W0908 23:57:11.113406 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:11.113500 kubelet[2160]: E0908 23:57:11.113471 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:11.311219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083130849.mount: Deactivated successfully. 
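The failures above share one root cause: nothing is listening on 10.0.0.108:6443 yet, because the kube-apiserver static pod is still being set up, so node registration, the lease request, and the informer list/watch calls each fail with "connection refused". A minimal probe in the same spirit is sketched below; it is a diagnostic illustration rather than kubelet code, and the two-second timeout is an arbitrary choice.

package main

import (
	"fmt"
	"net"
	"time"
)

// Hypothetical connectivity probe: attempts the same TCP connection the
// kubelet makes to the API server endpoint seen in the log. While the
// kube-apiserver static pod has not started, this prints the same
// "connect: connection refused" error the kubelet reports.
func main() {
	const endpoint = "10.0.0.108:6443" // taken from the log; adjust per node

	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable yet: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Once the kube-apiserver container comes up later in this log, the same dial succeeds and the kubelet's registration and lease retries go through.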
Sep 8 23:57:11.318302 containerd[1485]: time="2025-09-08T23:57:11.317623763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:11.319871 containerd[1485]: time="2025-09-08T23:57:11.319829604Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:11.321345 containerd[1485]: time="2025-09-08T23:57:11.321298630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:57:11.321893 containerd[1485]: time="2025-09-08T23:57:11.321849460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 8 23:57:11.323128 containerd[1485]: time="2025-09-08T23:57:11.323024609Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:11.324480 containerd[1485]: time="2025-09-08T23:57:11.324441651Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:11.324571 containerd[1485]: time="2025-09-08T23:57:11.324533643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:57:11.328153 containerd[1485]: time="2025-09-08T23:57:11.328112033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:57:11.329243 containerd[1485]: time="2025-09-08T23:57:11.329134676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 694.337542ms" Sep 8 23:57:11.329908 containerd[1485]: time="2025-09-08T23:57:11.329875657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 672.3421ms" Sep 8 23:57:11.333530 containerd[1485]: time="2025-09-08T23:57:11.333214796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 680.597189ms" Sep 8 23:57:11.444234 containerd[1485]: time="2025-09-08T23:57:11.444142408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:11.444357 containerd[1485]: time="2025-09-08T23:57:11.444199277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:11.444357 containerd[1485]: time="2025-09-08T23:57:11.444216297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.444357 containerd[1485]: time="2025-09-08T23:57:11.444288665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.446123 containerd[1485]: time="2025-09-08T23:57:11.446035028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:11.446123 containerd[1485]: time="2025-09-08T23:57:11.446085410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446114525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446247727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446236233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446277363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446287535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.448166 containerd[1485]: time="2025-09-08T23:57:11.446354176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:11.468253 systemd[1]: Started cri-containerd-c47b1eabe86c161dab70a1e7da0728d7906e5e64ffdf342535641bd32f1a107a.scope - libcontainer container c47b1eabe86c161dab70a1e7da0728d7906e5e64ffdf342535641bd32f1a107a. Sep 8 23:57:11.469305 systemd[1]: Started cri-containerd-e750a6bc0b9a65ab48ccb3f0e374d05dac5cfeb90e07d926027bac3f1526a140.scope - libcontainer container e750a6bc0b9a65ab48ccb3f0e374d05dac5cfeb90e07d926027bac3f1526a140. Sep 8 23:57:11.472218 systemd[1]: Started cri-containerd-1a64f47c38bbb6905060cb50123e5cb1cd6aebd660637c6f433a0cc7ad51665c.scope - libcontainer container 1a64f47c38bbb6905060cb50123e5cb1cd6aebd660637c6f433a0cc7ad51665c. 
Sep 8 23:57:11.504332 kubelet[2160]: W0908 23:57:11.504254 2160 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 8 23:57:11.504332 kubelet[2160]: E0908 23:57:11.504322 2160 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:57:11.506413 containerd[1485]: time="2025-09-08T23:57:11.506314188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c47b1eabe86c161dab70a1e7da0728d7906e5e64ffdf342535641bd32f1a107a\"" Sep 8 23:57:11.507987 kubelet[2160]: E0908 23:57:11.507874 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:11.509943 containerd[1485]: time="2025-09-08T23:57:11.509895381Z" level=info msg="CreateContainer within sandbox \"c47b1eabe86c161dab70a1e7da0728d7906e5e64ffdf342535641bd32f1a107a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:57:11.511233 containerd[1485]: time="2025-09-08T23:57:11.511190396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:afc63b673d4bb8b427c4013c9070c620,Namespace:kube-system,Attempt:0,} returns sandbox id \"e750a6bc0b9a65ab48ccb3f0e374d05dac5cfeb90e07d926027bac3f1526a140\"" Sep 8 23:57:11.512038 kubelet[2160]: E0908 23:57:11.512015 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:11.513476 containerd[1485]: time="2025-09-08T23:57:11.513437047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a64f47c38bbb6905060cb50123e5cb1cd6aebd660637c6f433a0cc7ad51665c\"" Sep 8 23:57:11.514079 kubelet[2160]: E0908 23:57:11.514060 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:11.514193 containerd[1485]: time="2025-09-08T23:57:11.514161447Z" level=info msg="CreateContainer within sandbox \"e750a6bc0b9a65ab48ccb3f0e374d05dac5cfeb90e07d926027bac3f1526a140\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:57:11.516274 containerd[1485]: time="2025-09-08T23:57:11.516242177Z" level=info msg="CreateContainer within sandbox \"1a64f47c38bbb6905060cb50123e5cb1cd6aebd660637c6f433a0cc7ad51665c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:57:11.526306 containerd[1485]: time="2025-09-08T23:57:11.526259715Z" level=info msg="CreateContainer within sandbox \"c47b1eabe86c161dab70a1e7da0728d7906e5e64ffdf342535641bd32f1a107a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68e8625a77cab00e4b4cf4eb2bb79fb52f2763c5c06d6e857555e3c6ebc1b26c\"" Sep 8 23:57:11.527028 
containerd[1485]: time="2025-09-08T23:57:11.526997532Z" level=info msg="StartContainer for \"68e8625a77cab00e4b4cf4eb2bb79fb52f2763c5c06d6e857555e3c6ebc1b26c\"" Sep 8 23:57:11.533398 containerd[1485]: time="2025-09-08T23:57:11.533360147Z" level=info msg="CreateContainer within sandbox \"e750a6bc0b9a65ab48ccb3f0e374d05dac5cfeb90e07d926027bac3f1526a140\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d3216b7b7aa11f70afddd670d649aa23ac44cb2a7bca4e57ec9f5f0469d25e1\"" Sep 8 23:57:11.534444 containerd[1485]: time="2025-09-08T23:57:11.534330967Z" level=info msg="StartContainer for \"7d3216b7b7aa11f70afddd670d649aa23ac44cb2a7bca4e57ec9f5f0469d25e1\"" Sep 8 23:57:11.535932 containerd[1485]: time="2025-09-08T23:57:11.535901036Z" level=info msg="CreateContainer within sandbox \"1a64f47c38bbb6905060cb50123e5cb1cd6aebd660637c6f433a0cc7ad51665c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4c663c4dfeefd8e9d9a5045e00c9c187ecc02eb6ed8e493920414541dc1e8a7\"" Sep 8 23:57:11.536254 containerd[1485]: time="2025-09-08T23:57:11.536231037Z" level=info msg="StartContainer for \"a4c663c4dfeefd8e9d9a5045e00c9c187ecc02eb6ed8e493920414541dc1e8a7\"" Sep 8 23:57:11.556617 systemd[1]: Started cri-containerd-68e8625a77cab00e4b4cf4eb2bb79fb52f2763c5c06d6e857555e3c6ebc1b26c.scope - libcontainer container 68e8625a77cab00e4b4cf4eb2bb79fb52f2763c5c06d6e857555e3c6ebc1b26c. Sep 8 23:57:11.566238 systemd[1]: Started cri-containerd-7d3216b7b7aa11f70afddd670d649aa23ac44cb2a7bca4e57ec9f5f0469d25e1.scope - libcontainer container 7d3216b7b7aa11f70afddd670d649aa23ac44cb2a7bca4e57ec9f5f0469d25e1. Sep 8 23:57:11.567825 systemd[1]: Started cri-containerd-a4c663c4dfeefd8e9d9a5045e00c9c187ecc02eb6ed8e493920414541dc1e8a7.scope - libcontainer container a4c663c4dfeefd8e9d9a5045e00c9c187ecc02eb6ed8e493920414541dc1e8a7. 
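The containerd and systemd lines above trace the CRI call sequence the kubelet drives for each static pod: RunPodSandbox yields a sandbox ID, CreateContainer is issued against that sandbox, and StartContainer launches the process, which systemd then tracks as a cri-containerd-<id>.scope unit. The sketch below exercises that sequence against the CRI v1 gRPC API; it is an illustrative client under assumed values (socket path, pod name, image), not the kubelet's own code path.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI endpoint for containerd; adjust if the runtime socket differs.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Metadata mirrors the &PodSandboxMetadata{...} printed by containerd above;
	// the name and UID here are placeholders, not the real pod values.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "example-localhost",
			Uid:       "00000000000000000000000000000000",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}

	// 1. RunPodSandbox returns the sandbox ID that later appears in the
	//    cri-containerd-<id>.scope unit names.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox (&ContainerMetadata{...} in the log).
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, after which the kubelet side logs
	//    "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("container %s running in sandbox %s", created.ContainerId, sb.PodSandboxId)
}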
Sep 8 23:57:11.597911 containerd[1485]: time="2025-09-08T23:57:11.597778178Z" level=info msg="StartContainer for \"68e8625a77cab00e4b4cf4eb2bb79fb52f2763c5c06d6e857555e3c6ebc1b26c\" returns successfully" Sep 8 23:57:11.605130 kubelet[2160]: E0908 23:57:11.605033 2160 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Sep 8 23:57:11.611216 containerd[1485]: time="2025-09-08T23:57:11.611115792Z" level=info msg="StartContainer for \"a4c663c4dfeefd8e9d9a5045e00c9c187ecc02eb6ed8e493920414541dc1e8a7\" returns successfully" Sep 8 23:57:11.619399 containerd[1485]: time="2025-09-08T23:57:11.619287526Z" level=info msg="StartContainer for \"7d3216b7b7aa11f70afddd670d649aa23ac44cb2a7bca4e57ec9f5f0469d25e1\" returns successfully" Sep 8 23:57:11.770229 kubelet[2160]: I0908 23:57:11.770199 2160 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:57:12.227775 kubelet[2160]: E0908 23:57:12.227741 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:12.229460 kubelet[2160]: E0908 23:57:12.229245 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:12.229460 kubelet[2160]: E0908 23:57:12.229410 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:13.171316 kubelet[2160]: I0908 23:57:13.171147 2160 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 8 23:57:13.171316 kubelet[2160]: E0908 23:57:13.171188 2160 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:57:13.183575 kubelet[2160]: E0908 23:57:13.183524 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:13.232477 kubelet[2160]: E0908 23:57:13.232416 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:13.284335 kubelet[2160]: E0908 23:57:13.284292 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:13.385022 kubelet[2160]: E0908 23:57:13.384970 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:13.485537 kubelet[2160]: E0908 23:57:13.485488 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:13.586304 kubelet[2160]: E0908 23:57:13.586259 2160 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:14.182443 kubelet[2160]: I0908 23:57:14.182383 2160 apiserver.go:52] "Watching apiserver" Sep 8 23:57:14.202474 kubelet[2160]: I0908 23:57:14.202371 2160 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 8 23:57:14.225527 kubelet[2160]: E0908 23:57:14.225480 2160 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:14.232768 kubelet[2160]: E0908 23:57:14.232736 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:14.361981 kubelet[2160]: E0908 23:57:14.361943 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:15.233563 kubelet[2160]: E0908 23:57:15.233521 2160 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:15.388333 systemd[1]: Reload requested from client PID 2444 ('systemctl') (unit session-5.scope)... Sep 8 23:57:15.388348 systemd[1]: Reloading... Sep 8 23:57:15.469138 zram_generator::config[2488]: No configuration found. Sep 8 23:57:15.557734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:57:15.646492 systemd[1]: Reloading finished in 257 ms. Sep 8 23:57:15.669693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:15.682255 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:57:15.682540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:15.682602 systemd[1]: kubelet.service: Consumed 1.028s CPU time, 129.9M memory peak. Sep 8 23:57:15.696481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:57:15.803903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:57:15.807722 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:57:15.847856 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:57:15.847856 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 8 23:57:15.847856 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 8 23:57:15.848249 kubelet[2530]: I0908 23:57:15.847857 2530 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:57:15.855188 kubelet[2530]: I0908 23:57:15.855153 2530 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 8 23:57:15.855188 kubelet[2530]: I0908 23:57:15.855180 2530 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:57:15.855490 kubelet[2530]: I0908 23:57:15.855443 2530 server.go:934] "Client rotation is on, will bootstrap in background" Sep 8 23:57:15.859035 kubelet[2530]: I0908 23:57:15.859009 2530 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 8 23:57:15.862053 kubelet[2530]: I0908 23:57:15.862016 2530 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:57:15.867068 kubelet[2530]: E0908 23:57:15.867016 2530 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:57:15.867068 kubelet[2530]: I0908 23:57:15.867051 2530 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:57:15.869587 kubelet[2530]: I0908 23:57:15.869566 2530 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 8 23:57:15.869701 kubelet[2530]: I0908 23:57:15.869688 2530 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 8 23:57:15.869810 kubelet[2530]: I0908 23:57:15.869786 2530 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:57:15.869987 kubelet[2530]: I0908 23:57:15.869811 2530 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:57:15.870058 kubelet[2530]: I0908 23:57:15.869995 2530 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:57:15.870058 kubelet[2530]: I0908 23:57:15.870005 2530 container_manager_linux.go:300] "Creating device plugin manager" Sep 8 23:57:15.870058 kubelet[2530]: I0908 23:57:15.870039 2530 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:15.870160 kubelet[2530]: I0908 23:57:15.870146 2530 kubelet.go:408] "Attempting to sync node with API server" Sep 8 23:57:15.870187 kubelet[2530]: I0908 23:57:15.870168 2530 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:57:15.870187 kubelet[2530]: I0908 23:57:15.870186 2530 kubelet.go:314] "Adding apiserver pod source" Sep 8 23:57:15.872271 kubelet[2530]: I0908 23:57:15.872129 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:57:15.873087 kubelet[2530]: I0908 23:57:15.873063 2530 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:57:15.873584 kubelet[2530]: I0908 23:57:15.873565 2530 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.873959 2530 server.go:1274] "Started kubelet" Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.874578 2530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.874814 2530 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.874867 2530 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.875082 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:57:15.876125 kubelet[2530]: I0908 23:57:15.875724 2530 server.go:449] "Adding debug handlers 
to kubelet server" Sep 8 23:57:15.878596 kubelet[2530]: I0908 23:57:15.876546 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:57:15.879726 kubelet[2530]: I0908 23:57:15.879699 2530 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 8 23:57:15.879987 kubelet[2530]: E0908 23:57:15.879963 2530 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:57:15.882883 kubelet[2530]: I0908 23:57:15.882838 2530 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:57:15.889613 kubelet[2530]: I0908 23:57:15.889453 2530 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 8 23:57:15.890818 kubelet[2530]: I0908 23:57:15.890775 2530 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:57:15.892660 kubelet[2530]: I0908 23:57:15.892627 2530 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:57:15.892660 kubelet[2530]: I0908 23:57:15.892650 2530 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:57:15.895766 kubelet[2530]: I0908 23:57:15.895179 2530 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:57:15.897696 kubelet[2530]: E0908 23:57:15.897668 2530 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:57:15.906310 kubelet[2530]: I0908 23:57:15.904898 2530 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 8 23:57:15.906310 kubelet[2530]: I0908 23:57:15.904937 2530 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 8 23:57:15.906310 kubelet[2530]: I0908 23:57:15.904970 2530 kubelet.go:2321] "Starting kubelet main sync loop" Sep 8 23:57:15.906310 kubelet[2530]: E0908 23:57:15.905017 2530 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:57:15.932167 kubelet[2530]: I0908 23:57:15.932141 2530 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 8 23:57:15.932167 kubelet[2530]: I0908 23:57:15.932159 2530 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 8 23:57:15.932314 kubelet[2530]: I0908 23:57:15.932182 2530 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:57:15.932387 kubelet[2530]: I0908 23:57:15.932368 2530 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:57:15.932413 kubelet[2530]: I0908 23:57:15.932386 2530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:57:15.932413 kubelet[2530]: I0908 23:57:15.932406 2530 policy_none.go:49] "None policy: Start" Sep 8 23:57:15.933214 kubelet[2530]: I0908 23:57:15.933194 2530 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 8 23:57:15.933283 kubelet[2530]: I0908 23:57:15.933222 2530 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:57:15.933567 kubelet[2530]: I0908 23:57:15.933542 2530 state_mem.go:75] "Updated machine memory state" Sep 8 23:57:15.940481 kubelet[2530]: I0908 23:57:15.940448 2530 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:57:15.940993 kubelet[2530]: I0908 23:57:15.940663 2530 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:57:15.940993 kubelet[2530]: I0908 23:57:15.940684 2530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:57:15.940993 kubelet[2530]: I0908 23:57:15.940878 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:57:16.011819 kubelet[2530]: E0908 23:57:16.011777 2530 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:16.013366 kubelet[2530]: E0908 23:57:16.013341 2530 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.044853 kubelet[2530]: I0908 23:57:16.044825 2530 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 8 23:57:16.050875 kubelet[2530]: I0908 23:57:16.050846 2530 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 8 23:57:16.050991 kubelet[2530]: I0908 23:57:16.050929 2530 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 8 23:57:16.092804 kubelet[2530]: I0908 23:57:16.092763 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.092804 kubelet[2530]: I0908 23:57:16.092809 2530 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.092804 kubelet[2530]: I0908 23:57:16.092830 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.093015 kubelet[2530]: I0908 23:57:16.092864 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.093015 kubelet[2530]: I0908 23:57:16.092881 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.093015 kubelet[2530]: I0908 23:57:16.092901 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:16.093015 kubelet[2530]: I0908 23:57:16.092919 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:57:16.093015 kubelet[2530]: I0908 23:57:16.092966 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:16.093158 kubelet[2530]: I0908 23:57:16.093000 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/afc63b673d4bb8b427c4013c9070c620-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"afc63b673d4bb8b427c4013c9070c620\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:16.312603 kubelet[2530]: E0908 23:57:16.312568 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:16.313217 kubelet[2530]: E0908 23:57:16.313196 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 8 23:57:16.313653 kubelet[2530]: E0908 23:57:16.313611 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:16.873385 kubelet[2530]: I0908 23:57:16.873127 2530 apiserver.go:52] "Watching apiserver" Sep 8 23:57:16.890381 kubelet[2530]: I0908 23:57:16.890328 2530 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 8 23:57:16.924480 kubelet[2530]: E0908 23:57:16.924434 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:16.942742 kubelet[2530]: E0908 23:57:16.942707 2530 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:57:16.942742 kubelet[2530]: E0908 23:57:16.942715 2530 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:57:16.942913 kubelet[2530]: E0908 23:57:16.942890 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:16.942913 kubelet[2530]: E0908 23:57:16.942906 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:16.968837 kubelet[2530]: I0908 23:57:16.968721 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.968704266 podStartE2EDuration="2.968704266s" podCreationTimestamp="2025-09-08 23:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:16.957858779 +0000 UTC m=+1.147265232" watchObservedRunningTime="2025-09-08 23:57:16.968704266 +0000 UTC m=+1.158110719" Sep 8 23:57:16.969284 kubelet[2530]: I0908 23:57:16.969149 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.969130892 podStartE2EDuration="969.130892ms" podCreationTimestamp="2025-09-08 23:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:16.968172054 +0000 UTC m=+1.157578507" watchObservedRunningTime="2025-09-08 23:57:16.969130892 +0000 UTC m=+1.158537505" Sep 8 23:57:16.992723 kubelet[2530]: I0908 23:57:16.992667 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.992630433 podStartE2EDuration="2.992630433s" podCreationTimestamp="2025-09-08 23:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:16.978565258 +0000 UTC m=+1.167971671" watchObservedRunningTime="2025-09-08 23:57:16.992630433 +0000 UTC m=+1.182036886" Sep 8 23:57:17.135186 sudo[1623]: pam_unix(sudo:session): session closed for user root Sep 8 23:57:17.136713 sshd[1622]: Connection closed by 10.0.0.1 port 46660 Sep 8 23:57:17.138149 sshd-session[1619]: 
pam_unix(sshd:session): session closed for user core Sep 8 23:57:17.141195 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:46660.service: Deactivated successfully. Sep 8 23:57:17.143028 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:57:17.143291 systemd[1]: session-5.scope: Consumed 5.023s CPU time, 220.4M memory peak. Sep 8 23:57:17.144297 systemd-logind[1470]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:57:17.145278 systemd-logind[1470]: Removed session 5. Sep 8 23:57:17.926428 kubelet[2530]: E0908 23:57:17.925620 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:17.926428 kubelet[2530]: E0908 23:57:17.926222 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:18.927361 kubelet[2530]: E0908 23:57:18.927287 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:18.974046 kubelet[2530]: E0908 23:57:18.974003 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:20.180858 kubelet[2530]: I0908 23:57:20.180825 2530 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:57:20.181554 kubelet[2530]: I0908 23:57:20.181535 2530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:57:20.181585 containerd[1485]: time="2025-09-08T23:57:20.181308036Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:57:21.110322 systemd[1]: Created slice kubepods-besteffort-pod10a4a38a_8aa9_4474_9bc5_167a3176616d.slice - libcontainer container kubepods-besteffort-pod10a4a38a_8aa9_4474_9bc5_167a3176616d.slice. 
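"Updating runtime config through cri with podcidr" above is the kubelet passing the node's Pod CIDR (192.168.0.0/24 here) to the container runtime over the CRI UpdateRuntimeConfig call; containerd then notes that no CNI config template is specified because it expects flannel to install the CNI configuration itself. A minimal sketch of that call follows, again as an illustrative client with an assumed socket path rather than kubelet code.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Push the Pod CIDR from the log down to the runtime, as the kubelet does
	// once the node object carries spec.podCIDR.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(context.Background(),
		&runtimeapi.UpdateRuntimeConfigRequest{
			RuntimeConfig: &runtimeapi.RuntimeConfig{
				NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("runtime config updated with pod CIDR 192.168.0.0/24")
}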
Sep 8 23:57:21.128539 kubelet[2530]: I0908 23:57:21.127988 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5hw\" (UniqueName: \"kubernetes.io/projected/10a4a38a-8aa9-4474-9bc5-167a3176616d-kube-api-access-7d5hw\") pod \"kube-proxy-g7wbj\" (UID: \"10a4a38a-8aa9-4474-9bc5-167a3176616d\") " pod="kube-system/kube-proxy-g7wbj" Sep 8 23:57:21.128539 kubelet[2530]: I0908 23:57:21.128040 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ec87160c-4dd5-414f-a241-23ec16faad9f-cni-plugin\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128539 kubelet[2530]: I0908 23:57:21.128098 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a4a38a-8aa9-4474-9bc5-167a3176616d-xtables-lock\") pod \"kube-proxy-g7wbj\" (UID: \"10a4a38a-8aa9-4474-9bc5-167a3176616d\") " pod="kube-system/kube-proxy-g7wbj" Sep 8 23:57:21.128539 kubelet[2530]: I0908 23:57:21.128118 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10a4a38a-8aa9-4474-9bc5-167a3176616d-kube-proxy\") pod \"kube-proxy-g7wbj\" (UID: \"10a4a38a-8aa9-4474-9bc5-167a3176616d\") " pod="kube-system/kube-proxy-g7wbj" Sep 8 23:57:21.128539 kubelet[2530]: I0908 23:57:21.128133 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ec87160c-4dd5-414f-a241-23ec16faad9f-cni\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128751 kubelet[2530]: I0908 23:57:21.128148 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec87160c-4dd5-414f-a241-23ec16faad9f-xtables-lock\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128751 kubelet[2530]: I0908 23:57:21.128163 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5bjh\" (UniqueName: \"kubernetes.io/projected/ec87160c-4dd5-414f-a241-23ec16faad9f-kube-api-access-g5bjh\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128751 kubelet[2530]: I0908 23:57:21.128182 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ec87160c-4dd5-414f-a241-23ec16faad9f-run\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128751 kubelet[2530]: I0908 23:57:21.128197 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ec87160c-4dd5-414f-a241-23ec16faad9f-flannel-cfg\") pod \"kube-flannel-ds-d6dgs\" (UID: \"ec87160c-4dd5-414f-a241-23ec16faad9f\") " pod="kube-flannel/kube-flannel-ds-d6dgs" Sep 8 23:57:21.128751 kubelet[2530]: I0908 23:57:21.128241 2530 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a4a38a-8aa9-4474-9bc5-167a3176616d-lib-modules\") pod \"kube-proxy-g7wbj\" (UID: \"10a4a38a-8aa9-4474-9bc5-167a3176616d\") " pod="kube-system/kube-proxy-g7wbj" Sep 8 23:57:21.129306 systemd[1]: Created slice kubepods-burstable-podec87160c_4dd5_414f_a241_23ec16faad9f.slice - libcontainer container kubepods-burstable-podec87160c_4dd5_414f_a241_23ec16faad9f.slice. Sep 8 23:57:21.424362 kubelet[2530]: E0908 23:57:21.423956 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:21.425007 containerd[1485]: time="2025-09-08T23:57:21.424676992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7wbj,Uid:10a4a38a-8aa9-4474-9bc5-167a3176616d,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:21.432015 kubelet[2530]: E0908 23:57:21.431874 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:21.432727 containerd[1485]: time="2025-09-08T23:57:21.432637460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-d6dgs,Uid:ec87160c-4dd5-414f-a241-23ec16faad9f,Namespace:kube-flannel,Attempt:0,}" Sep 8 23:57:21.450384 containerd[1485]: time="2025-09-08T23:57:21.449978731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:21.450384 containerd[1485]: time="2025-09-08T23:57:21.450052195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:21.450384 containerd[1485]: time="2025-09-08T23:57:21.450068200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:21.450838 containerd[1485]: time="2025-09-08T23:57:21.450503539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:21.459664 containerd[1485]: time="2025-09-08T23:57:21.459554556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:21.459664 containerd[1485]: time="2025-09-08T23:57:21.459614856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:21.460415 containerd[1485]: time="2025-09-08T23:57:21.460200483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:21.460415 containerd[1485]: time="2025-09-08T23:57:21.460309638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:21.468289 systemd[1]: Started cri-containerd-c463dbbed087197b26c1cee96665f6a62675eba2c97b3bdeb6e4209eddb354b3.scope - libcontainer container c463dbbed087197b26c1cee96665f6a62675eba2c97b3bdeb6e4209eddb354b3. Sep 8 23:57:21.472889 systemd[1]: Started cri-containerd-c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06.scope - libcontainer container c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06. 
Sep 8 23:57:21.492497 containerd[1485]: time="2025-09-08T23:57:21.492446765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7wbj,Uid:10a4a38a-8aa9-4474-9bc5-167a3176616d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c463dbbed087197b26c1cee96665f6a62675eba2c97b3bdeb6e4209eddb354b3\"" Sep 8 23:57:21.493630 kubelet[2530]: E0908 23:57:21.493608 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:21.496470 containerd[1485]: time="2025-09-08T23:57:21.496185642Z" level=info msg="CreateContainer within sandbox \"c463dbbed087197b26c1cee96665f6a62675eba2c97b3bdeb6e4209eddb354b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:57:21.507783 containerd[1485]: time="2025-09-08T23:57:21.507732218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-d6dgs,Uid:ec87160c-4dd5-414f-a241-23ec16faad9f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\"" Sep 8 23:57:21.508439 kubelet[2530]: E0908 23:57:21.508407 2530 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:57:21.509698 containerd[1485]: time="2025-09-08T23:57:21.509669318Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Sep 8 23:57:21.515176 containerd[1485]: time="2025-09-08T23:57:21.515129586Z" level=info msg="CreateContainer within sandbox \"c463dbbed087197b26c1cee96665f6a62675eba2c97b3bdeb6e4209eddb354b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"983e6306ce52f83092691545127336ef4b9a49aa592dcaea2e8324440e59d5ac\"" Sep 8 23:57:21.516379 containerd[1485]: time="2025-09-08T23:57:21.515997424Z" level=info msg="StartContainer for \"983e6306ce52f83092691545127336ef4b9a49aa592dcaea2e8324440e59d5ac\"" Sep 8 23:57:21.548294 systemd[1]: Started cri-containerd-983e6306ce52f83092691545127336ef4b9a49aa592dcaea2e8324440e59d5ac.scope - libcontainer container 983e6306ce52f83092691545127336ef4b9a49aa592dcaea2e8324440e59d5ac. Sep 8 23:57:21.572505 containerd[1485]: time="2025-09-08T23:57:21.572467420Z" level=info msg="StartContainer for \"983e6306ce52f83092691545127336ef4b9a49aa592dcaea2e8324440e59d5ac\" returns successfully" Sep 8 23:57:21.942946 kubelet[2530]: I0908 23:57:21.942875 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7wbj" podStartSLOduration=0.942857541 podStartE2EDuration="942.857541ms" podCreationTimestamp="2025-09-08 23:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:21.942748146 +0000 UTC m=+6.132154599" watchObservedRunningTime="2025-09-08 23:57:21.942857541 +0000 UTC m=+6.132263954" Sep 8 23:57:22.702069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3671363603.mount: Deactivated successfully. 
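The recurring dns.go:153 "Nameserver limits exceeded" warnings are the kubelet trimming the node's resolv.conf to the resolver limit when it builds pod DNS configuration; the "applied nameserver line" shows the three servers that survive (1.1.1.1 1.0.0.1 8.8.8.8). A rough approximation of that trimming is sketched below, assuming the conventional cap of three nameservers; it is not the kubelet's actual implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit the kubelet enforces
// when assembling pod DNS configuration (assumption: three).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(nameservers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applying only: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameservers:", strings.Join(nameservers, " "))
}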
Sep 8 23:57:22.730084 containerd[1485]: time="2025-09-08T23:57:22.730041436Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:22.730631 containerd[1485]: time="2025-09-08T23:57:22.730589673Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Sep 8 23:57:22.731484 containerd[1485]: time="2025-09-08T23:57:22.731454801Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:22.733964 containerd[1485]: time="2025-09-08T23:57:22.733612419Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:22.734559 containerd[1485]: time="2025-09-08T23:57:22.734534564Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.224827874s" Sep 8 23:57:22.734654 containerd[1485]: time="2025-09-08T23:57:22.734588179Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Sep 8 23:57:22.736960 containerd[1485]: time="2025-09-08T23:57:22.736852868Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Sep 8 23:57:22.750654 containerd[1485]: time="2025-09-08T23:57:22.750609529Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e\"" Sep 8 23:57:22.751116 containerd[1485]: time="2025-09-08T23:57:22.751079744Z" level=info msg="StartContainer for \"be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e\"" Sep 8 23:57:22.775263 systemd[1]: Started cri-containerd-be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e.scope - libcontainer container be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e. Sep 8 23:57:22.802272 systemd[1]: cri-containerd-be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e.scope: Deactivated successfully. 
Sep 8 23:57:22.804392 containerd[1485]: time="2025-09-08T23:57:22.804277785Z" level=info msg="StartContainer for \"be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e\" returns successfully" Sep 8 23:57:22.841122 containerd[1485]: time="2025-09-08T23:57:22.841049240Z" level=info msg="shim disconnected" id=be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e namespace=k8s.io Sep 8 23:57:22.841122 containerd[1485]: time="2025-09-08T23:57:22.841117179Z" level=warning msg="cleaning up after shim disconnected" id=be16398da095f9efbe8f19b19397c988103e71bb810a3583f28260958701309e namespace=k8s.io Sep 8 23:57:22.841122 containerd[1485]: time="2025-09-08T23:57:22.841125861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:57:22.938021 containerd[1485]: time="2025-09-08T23:57:22.937981250Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Sep 8 23:57:24.109176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount156145512.mount: Deactivated successfully. Sep 8 23:57:26.845960 containerd[1485]: time="2025-09-08T23:57:26.845913277Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:26.846957 containerd[1485]: time="2025-09-08T23:57:26.846737546Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Sep 8 23:57:26.850007 containerd[1485]: time="2025-09-08T23:57:26.847682722Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:26.852389 containerd[1485]: time="2025-09-08T23:57:26.852342187Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:57:26.853655 containerd[1485]: time="2025-09-08T23:57:26.853621439Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.915600018s" Sep 8 23:57:26.853754 containerd[1485]: time="2025-09-08T23:57:26.853738506Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Sep 8 23:57:26.857026 containerd[1485]: time="2025-09-08T23:57:26.856993050Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 8 23:57:26.899441 containerd[1485]: time="2025-09-08T23:57:26.899282196Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4\"" Sep 8 23:57:26.904291 containerd[1485]: time="2025-09-08T23:57:26.904216163Z" level=info msg="StartContainer for \"f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4\"" Sep 8 23:57:26.936296 systemd[1]: Started cri-containerd-f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4.scope - libcontainer 
container f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4. Sep 8 23:57:26.959490 systemd[1]: cri-containerd-f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4.scope: Deactivated successfully. Sep 8 23:57:26.961786 containerd[1485]: time="2025-09-08T23:57:26.961653412Z" level=info msg="StartContainer for \"f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4\" returns successfully" Sep 8 23:57:27.055563 kubelet[2530]: I0908 23:57:27.055159 2530 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 8 23:57:27.087051 containerd[1485]: time="2025-09-08T23:57:27.086981602Z" level=info msg="shim disconnected" id=f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4 namespace=k8s.io Sep 8 23:57:27.087051 containerd[1485]: time="2025-09-08T23:57:27.087037574Z" level=warning msg="cleaning up after shim disconnected" id=f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4 namespace=k8s.io Sep 8 23:57:27.087051 containerd[1485]: time="2025-09-08T23:57:27.087046456Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:57:27.103157 systemd[1]: Created slice kubepods-burstable-pod2b55aa5f_ba41_4eac_b312_e31ad2dc27de.slice - libcontainer container kubepods-burstable-pod2b55aa5f_ba41_4eac_b312_e31ad2dc27de.slice. Sep 8 23:57:27.111345 systemd[1]: Created slice kubepods-burstable-poda9c6a1c6_375b_4a1b_aa60_eff423de26e8.slice - libcontainer container kubepods-burstable-poda9c6a1c6_375b_4a1b_aa60_eff423de26e8.slice. Sep 8 23:57:27.169266 kubelet[2530]: I0908 23:57:27.169206 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9c6a1c6-375b-4a1b-aa60-eff423de26e8-config-volume\") pod \"coredns-7c65d6cfc9-mjg8c\" (UID: \"a9c6a1c6-375b-4a1b-aa60-eff423de26e8\") " pod="kube-system/coredns-7c65d6cfc9-mjg8c" Sep 8 23:57:27.169266 kubelet[2530]: I0908 23:57:27.169257 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4krm\" (UniqueName: \"kubernetes.io/projected/a9c6a1c6-375b-4a1b-aa60-eff423de26e8-kube-api-access-q4krm\") pod \"coredns-7c65d6cfc9-mjg8c\" (UID: \"a9c6a1c6-375b-4a1b-aa60-eff423de26e8\") " pod="kube-system/coredns-7c65d6cfc9-mjg8c" Sep 8 23:57:27.169266 kubelet[2530]: I0908 23:57:27.169278 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqgzp\" (UniqueName: \"kubernetes.io/projected/2b55aa5f-ba41-4eac-b312-e31ad2dc27de-kube-api-access-qqgzp\") pod \"coredns-7c65d6cfc9-xdz8s\" (UID: \"2b55aa5f-ba41-4eac-b312-e31ad2dc27de\") " pod="kube-system/coredns-7c65d6cfc9-xdz8s" Sep 8 23:57:27.169442 kubelet[2530]: I0908 23:57:27.169295 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b55aa5f-ba41-4eac-b312-e31ad2dc27de-config-volume\") pod \"coredns-7c65d6cfc9-xdz8s\" (UID: \"2b55aa5f-ba41-4eac-b312-e31ad2dc27de\") " pod="kube-system/coredns-7c65d6cfc9-xdz8s" Sep 8 23:57:27.408993 containerd[1485]: time="2025-09-08T23:57:27.408871578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdz8s,Uid:2b55aa5f-ba41-4eac-b312-e31ad2dc27de,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:27.415590 containerd[1485]: time="2025-09-08T23:57:27.415551063Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjg8c,Uid:a9c6a1c6-375b-4a1b-aa60-eff423de26e8,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:27.447860 containerd[1485]: time="2025-09-08T23:57:27.447797637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdz8s,Uid:2b55aa5f-ba41-4eac-b312-e31ad2dc27de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea9b56df9a0bcea7b29f2c06007b4fe924bce8729a4be4bb7a381b743f895121\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:57:27.448096 kubelet[2530]: E0908 23:57:27.448050 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9b56df9a0bcea7b29f2c06007b4fe924bce8729a4be4bb7a381b743f895121\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:57:27.448162 kubelet[2530]: E0908 23:57:27.448138 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9b56df9a0bcea7b29f2c06007b4fe924bce8729a4be4bb7a381b743f895121\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-xdz8s" Sep 8 23:57:27.448204 kubelet[2530]: E0908 23:57:27.448168 2530 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9b56df9a0bcea7b29f2c06007b4fe924bce8729a4be4bb7a381b743f895121\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-xdz8s" Sep 8 23:57:27.448249 kubelet[2530]: E0908 23:57:27.448215 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xdz8s_kube-system(2b55aa5f-ba41-4eac-b312-e31ad2dc27de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xdz8s_kube-system(2b55aa5f-ba41-4eac-b312-e31ad2dc27de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea9b56df9a0bcea7b29f2c06007b4fe924bce8729a4be4bb7a381b743f895121\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-xdz8s" podUID="2b55aa5f-ba41-4eac-b312-e31ad2dc27de" Sep 8 23:57:27.448507 containerd[1485]: time="2025-09-08T23:57:27.448426253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjg8c,Uid:a9c6a1c6-375b-4a1b-aa60-eff423de26e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49afa28c34df6866f48027f9560fb82ca5e6a9d24108b0f0d08212f52054a27d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:57:27.448633 kubelet[2530]: E0908 23:57:27.448572 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49afa28c34df6866f48027f9560fb82ca5e6a9d24108b0f0d08212f52054a27d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:57:27.448633 kubelet[2530]: E0908 
23:57:27.448606 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49afa28c34df6866f48027f9560fb82ca5e6a9d24108b0f0d08212f52054a27d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-mjg8c" Sep 8 23:57:27.448708 kubelet[2530]: E0908 23:57:27.448633 2530 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49afa28c34df6866f48027f9560fb82ca5e6a9d24108b0f0d08212f52054a27d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-mjg8c" Sep 8 23:57:27.448708 kubelet[2530]: E0908 23:57:27.448676 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mjg8c_kube-system(a9c6a1c6-375b-4a1b-aa60-eff423de26e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mjg8c_kube-system(a9c6a1c6-375b-4a1b-aa60-eff423de26e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49afa28c34df6866f48027f9560fb82ca5e6a9d24108b0f0d08212f52054a27d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-mjg8c" podUID="a9c6a1c6-375b-4a1b-aa60-eff423de26e8" Sep 8 23:57:27.898505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f971c3c237f40431ad8625df31eaac878eed72c943a7ff8d096ba34c62edebc4-rootfs.mount: Deactivated successfully. Sep 8 23:57:27.953848 containerd[1485]: time="2025-09-08T23:57:27.952488308Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Sep 8 23:57:27.977575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754333562.mount: Deactivated successfully. Sep 8 23:57:27.983653 containerd[1485]: time="2025-09-08T23:57:27.983547585Z" level=info msg="CreateContainer within sandbox \"c26fdf73da2b46787c17a82dc0db463bb33685d1c31aca671a382f6a33065d06\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"8f4033a7d0b1b8263744911c4a0db3a457ea7d7ab6e0df99e552e5b0b6171b2d\"" Sep 8 23:57:27.984504 containerd[1485]: time="2025-09-08T23:57:27.984475946Z" level=info msg="StartContainer for \"8f4033a7d0b1b8263744911c4a0db3a457ea7d7ab6e0df99e552e5b0b6171b2d\"" Sep 8 23:57:28.015260 systemd[1]: Started cri-containerd-8f4033a7d0b1b8263744911c4a0db3a457ea7d7ab6e0df99e552e5b0b6171b2d.scope - libcontainer container 8f4033a7d0b1b8263744911c4a0db3a457ea7d7ab6e0df99e552e5b0b6171b2d. 
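For context on the RunPodSandbox failures above: the flannel CNI plugin refuses to add a pod to the network until /run/flannel/subnet.env exists, and that file only appears once the kube-flannel container started here has obtained its lease (the later entries show the retried coredns sandboxes succeeding after flannel.1 comes up). Below is a minimal, illustrative Go sketch of reading such a KEY=VALUE file; the key names shown and the parsing behaviour are assumptions made for illustration, not taken from flannel's source or from this log.

```go
// Illustrative sketch only: reads a flannel-style subnet.env file of
// KEY=VALUE lines (keys such as FLANNEL_NETWORK, FLANNEL_SUBNET,
// FLANNEL_MTU are assumed here for illustration). The open() failure
// path is the error mode logged above: the file does not exist until
// kube-flannel has written its lease.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err // e.g. "no such file or directory" as seen above
	}
	defer f.Close()

	env := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		fmt.Fprintln(os.Stderr, "loadSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Println(env)
}
```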
Sep 8 23:57:28.039151 containerd[1485]: time="2025-09-08T23:57:28.039083678Z" level=info msg="StartContainer for \"8f4033a7d0b1b8263744911c4a0db3a457ea7d7ab6e0df99e552e5b0b6171b2d\" returns successfully" Sep 8 23:57:29.011697 kubelet[2530]: I0908 23:57:29.011630 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-d6dgs" podStartSLOduration=2.66624896 podStartE2EDuration="8.011613729s" podCreationTimestamp="2025-09-08 23:57:21 +0000 UTC" firstStartedPulling="2025-09-08 23:57:21.509185883 +0000 UTC m=+5.698592336" lastFinishedPulling="2025-09-08 23:57:26.854550652 +0000 UTC m=+11.043957105" observedRunningTime="2025-09-08 23:57:28.966832153 +0000 UTC m=+13.156238606" watchObservedRunningTime="2025-09-08 23:57:29.011613729 +0000 UTC m=+13.201020182" Sep 8 23:57:29.112424 systemd-networkd[1409]: flannel.1: Link UP Sep 8 23:57:29.112430 systemd-networkd[1409]: flannel.1: Gained carrier Sep 8 23:57:30.794682 systemd-networkd[1409]: flannel.1: Gained IPv6LL Sep 8 23:57:35.107026 update_engine[1472]: I20250908 23:57:35.106465 1472 update_attempter.cc:509] Updating boot flags... Sep 8 23:57:35.142759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3210) Sep 8 23:57:35.193119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3211) Sep 8 23:57:37.907020 containerd[1485]: time="2025-09-08T23:57:37.906952848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjg8c,Uid:a9c6a1c6-375b-4a1b-aa60-eff423de26e8,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:37.929249 systemd-networkd[1409]: cni0: Link UP Sep 8 23:57:37.929255 systemd-networkd[1409]: cni0: Gained carrier Sep 8 23:57:37.932585 systemd-networkd[1409]: cni0: Lost carrier Sep 8 23:57:37.935988 systemd-networkd[1409]: veth92ad54b8: Link UP Sep 8 23:57:37.937277 kernel: cni0: port 1(veth92ad54b8) entered blocking state Sep 8 23:57:37.937343 kernel: cni0: port 1(veth92ad54b8) entered disabled state Sep 8 23:57:37.937361 kernel: veth92ad54b8: entered allmulticast mode Sep 8 23:57:37.938190 kernel: veth92ad54b8: entered promiscuous mode Sep 8 23:57:37.939219 kernel: cni0: port 1(veth92ad54b8) entered blocking state Sep 8 23:57:37.939249 kernel: cni0: port 1(veth92ad54b8) entered forwarding state Sep 8 23:57:37.944112 kernel: cni0: port 1(veth92ad54b8) entered disabled state Sep 8 23:57:37.950606 kernel: cni0: port 1(veth92ad54b8) entered blocking state Sep 8 23:57:37.950699 kernel: cni0: port 1(veth92ad54b8) entered forwarding state Sep 8 23:57:37.950648 systemd-networkd[1409]: veth92ad54b8: Gained carrier Sep 8 23:57:37.951222 systemd-networkd[1409]: cni0: Gained carrier Sep 8 23:57:37.953323 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Sep 8 23:57:37.953323 containerd[1485]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:57:37.973751 containerd[1485]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:57:37.973624221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:37.973751 containerd[1485]: time="2025-09-08T23:57:37.973706711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:37.973751 containerd[1485]: time="2025-09-08T23:57:37.973717913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:37.973937 containerd[1485]: time="2025-09-08T23:57:37.973828047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:38.001326 systemd[1]: Started cri-containerd-1a51534097fcad3be6b426ab33060d6c4fb1c4bedf082682f09eed05e642ae3b.scope - libcontainer container 1a51534097fcad3be6b426ab33060d6c4fb1c4bedf082682f09eed05e642ae3b. Sep 8 23:57:38.012833 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:57:38.029497 containerd[1485]: time="2025-09-08T23:57:38.029450181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mjg8c,Uid:a9c6a1c6-375b-4a1b-aa60-eff423de26e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a51534097fcad3be6b426ab33060d6c4fb1c4bedf082682f09eed05e642ae3b\"" Sep 8 23:57:38.031976 containerd[1485]: time="2025-09-08T23:57:38.031939246Z" level=info msg="CreateContainer within sandbox \"1a51534097fcad3be6b426ab33060d6c4fb1c4bedf082682f09eed05e642ae3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:57:38.040670 containerd[1485]: time="2025-09-08T23:57:38.040626911Z" level=info msg="CreateContainer within sandbox \"1a51534097fcad3be6b426ab33060d6c4fb1c4bedf082682f09eed05e642ae3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2ae5af81640bffccfd86eb6940ceb6ae2cd7b22dd3a4327d13477abbe7271f3\"" Sep 8 23:57:38.041006 containerd[1485]: time="2025-09-08T23:57:38.040988115Z" level=info msg="StartContainer for \"f2ae5af81640bffccfd86eb6940ceb6ae2cd7b22dd3a4327d13477abbe7271f3\"" Sep 8 23:57:38.064241 systemd[1]: Started cri-containerd-f2ae5af81640bffccfd86eb6940ceb6ae2cd7b22dd3a4327d13477abbe7271f3.scope - libcontainer container f2ae5af81640bffccfd86eb6940ceb6ae2cd7b22dd3a4327d13477abbe7271f3. 
Sep 8 23:57:38.090620 containerd[1485]: time="2025-09-08T23:57:38.090581512Z" level=info msg="StartContainer for \"f2ae5af81640bffccfd86eb6940ceb6ae2cd7b22dd3a4327d13477abbe7271f3\" returns successfully" Sep 8 23:57:38.986515 systemd-networkd[1409]: cni0: Gained IPv6LL Sep 8 23:57:39.000792 kubelet[2530]: I0908 23:57:39.000018 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mjg8c" podStartSLOduration=17.999997464 podStartE2EDuration="17.999997464s" podCreationTimestamp="2025-09-08 23:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:38.988020636 +0000 UTC m=+23.177427089" watchObservedRunningTime="2025-09-08 23:57:38.999997464 +0000 UTC m=+23.189403917" Sep 8 23:57:39.626238 systemd-networkd[1409]: veth92ad54b8: Gained IPv6LL Sep 8 23:57:39.906539 containerd[1485]: time="2025-09-08T23:57:39.906223015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdz8s,Uid:2b55aa5f-ba41-4eac-b312-e31ad2dc27de,Namespace:kube-system,Attempt:0,}" Sep 8 23:57:39.924171 systemd-networkd[1409]: vethdb330794: Link UP Sep 8 23:57:39.925560 kernel: cni0: port 2(vethdb330794) entered blocking state Sep 8 23:57:39.925849 kernel: cni0: port 2(vethdb330794) entered disabled state Sep 8 23:57:39.925905 kernel: vethdb330794: entered allmulticast mode Sep 8 23:57:39.927106 kernel: vethdb330794: entered promiscuous mode Sep 8 23:57:39.934335 kernel: cni0: port 2(vethdb330794) entered blocking state Sep 8 23:57:39.934625 kernel: cni0: port 2(vethdb330794) entered forwarding state Sep 8 23:57:39.934433 systemd-networkd[1409]: vethdb330794: Gained carrier Sep 8 23:57:39.940544 containerd[1485]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a68e8), "name":"cbr0", "type":"bridge"} Sep 8 23:57:39.940544 containerd[1485]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:57:39.987864 containerd[1485]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:57:39.987576242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:57:39.987864 containerd[1485]: time="2025-09-08T23:57:39.987655532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:57:39.987864 containerd[1485]: time="2025-09-08T23:57:39.987685215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:39.990372 containerd[1485]: time="2025-09-08T23:57:39.988686972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:57:40.009274 systemd[1]: Started cri-containerd-39106512b5e49818f15f89aa3406c8421d24c1e4787e8debd58ab60b3d9a11ec.scope - libcontainer container 39106512b5e49818f15f89aa3406c8421d24c1e4787e8debd58ab60b3d9a11ec. Sep 8 23:57:40.020044 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:57:40.036778 containerd[1485]: time="2025-09-08T23:57:40.036737398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdz8s,Uid:2b55aa5f-ba41-4eac-b312-e31ad2dc27de,Namespace:kube-system,Attempt:0,} returns sandbox id \"39106512b5e49818f15f89aa3406c8421d24c1e4787e8debd58ab60b3d9a11ec\"" Sep 8 23:57:40.039540 containerd[1485]: time="2025-09-08T23:57:40.039500946Z" level=info msg="CreateContainer within sandbox \"39106512b5e49818f15f89aa3406c8421d24c1e4787e8debd58ab60b3d9a11ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:57:40.089414 containerd[1485]: time="2025-09-08T23:57:40.089365029Z" level=info msg="CreateContainer within sandbox \"39106512b5e49818f15f89aa3406c8421d24c1e4787e8debd58ab60b3d9a11ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da703a7169a54ae82773f2def70959b3742ef69b34a56ca67b5030639b1a46d5\"" Sep 8 23:57:40.090324 containerd[1485]: time="2025-09-08T23:57:40.090287692Z" level=info msg="StartContainer for \"da703a7169a54ae82773f2def70959b3742ef69b34a56ca67b5030639b1a46d5\"" Sep 8 23:57:40.095972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161330526.mount: Deactivated successfully. Sep 8 23:57:40.121326 systemd[1]: Started cri-containerd-da703a7169a54ae82773f2def70959b3742ef69b34a56ca67b5030639b1a46d5.scope - libcontainer container da703a7169a54ae82773f2def70959b3742ef69b34a56ca67b5030639b1a46d5. Sep 8 23:57:40.161835 containerd[1485]: time="2025-09-08T23:57:40.161244887Z" level=info msg="StartContainer for \"da703a7169a54ae82773f2def70959b3742ef69b34a56ca67b5030639b1a46d5\" returns successfully" Sep 8 23:57:40.727344 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:37308.service - OpenSSH per-connection server daemon (10.0.0.1:37308). Sep 8 23:57:40.773594 sshd[3478]: Accepted publickey for core from 10.0.0.1 port 37308 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:40.775019 sshd-session[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:40.779083 systemd-logind[1470]: New session 6 of user core. Sep 8 23:57:40.788292 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:57:40.938034 sshd[3480]: Connection closed by 10.0.0.1 port 37308 Sep 8 23:57:40.938729 sshd-session[3478]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:40.942182 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:37308.service: Deactivated successfully. Sep 8 23:57:40.943839 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:57:40.945392 systemd-logind[1470]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:57:40.946913 systemd-logind[1470]: Removed session 6. Sep 8 23:57:41.418259 systemd-networkd[1409]: vethdb330794: Gained IPv6LL Sep 8 23:57:45.954829 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322). 
Sep 8 23:57:46.018523 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:46.019295 sshd-session[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:46.025197 systemd-logind[1470]: New session 7 of user core. Sep 8 23:57:46.035340 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:57:46.164149 sshd[3521]: Connection closed by 10.0.0.1 port 37322 Sep 8 23:57:46.164064 sshd-session[3518]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:46.168239 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:37322.service: Deactivated successfully. Sep 8 23:57:46.173300 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:57:46.174667 systemd-logind[1470]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:57:46.175476 systemd-logind[1470]: Removed session 7. Sep 8 23:57:47.432798 kubelet[2530]: I0908 23:57:47.431463 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xdz8s" podStartSLOduration=26.431446179 podStartE2EDuration="26.431446179s" podCreationTimestamp="2025-09-08 23:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:57:41.014039872 +0000 UTC m=+25.203446325" watchObservedRunningTime="2025-09-08 23:57:47.431446179 +0000 UTC m=+31.620852632" Sep 8 23:57:51.184761 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:48002.service - OpenSSH per-connection server daemon (10.0.0.1:48002). Sep 8 23:57:51.226859 sshd[3560]: Accepted publickey for core from 10.0.0.1 port 48002 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:51.228318 sshd-session[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:51.233839 systemd-logind[1470]: New session 8 of user core. Sep 8 23:57:51.252646 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:57:51.373033 sshd[3562]: Connection closed by 10.0.0.1 port 48002 Sep 8 23:57:51.373607 sshd-session[3560]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:51.384596 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:48002.service: Deactivated successfully. Sep 8 23:57:51.386288 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:57:51.388709 systemd-logind[1470]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:57:51.400676 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004). Sep 8 23:57:51.402336 systemd-logind[1470]: Removed session 8. Sep 8 23:57:51.439353 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:51.440606 sshd-session[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:51.446182 systemd-logind[1470]: New session 9 of user core. Sep 8 23:57:51.452313 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:57:51.601838 sshd[3578]: Connection closed by 10.0.0.1 port 48004 Sep 8 23:57:51.603071 sshd-session[3575]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:51.620865 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:48004.service: Deactivated successfully. Sep 8 23:57:51.626188 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 8 23:57:51.636061 systemd-logind[1470]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:57:51.648710 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:48010.service - OpenSSH per-connection server daemon (10.0.0.1:48010). Sep 8 23:57:51.649792 systemd-logind[1470]: Removed session 9. Sep 8 23:57:51.690515 sshd[3589]: Accepted publickey for core from 10.0.0.1 port 48010 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:51.691871 sshd-session[3589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:51.696630 systemd-logind[1470]: New session 10 of user core. Sep 8 23:57:51.703305 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:57:51.816554 sshd[3592]: Connection closed by 10.0.0.1 port 48010 Sep 8 23:57:51.816909 sshd-session[3589]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:51.820632 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:48010.service: Deactivated successfully. Sep 8 23:57:51.822338 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:57:51.825263 systemd-logind[1470]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:57:51.827956 systemd-logind[1470]: Removed session 10. Sep 8 23:57:56.832776 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:48012.service - OpenSSH per-connection server daemon (10.0.0.1:48012). Sep 8 23:57:56.876543 sshd[3629]: Accepted publickey for core from 10.0.0.1 port 48012 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:56.877668 sshd-session[3629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:56.883966 systemd-logind[1470]: New session 11 of user core. Sep 8 23:57:56.896322 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:57:57.010770 sshd[3631]: Connection closed by 10.0.0.1 port 48012 Sep 8 23:57:57.010622 sshd-session[3629]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:57.036465 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:48012.service: Deactivated successfully. Sep 8 23:57:57.038150 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:57:57.043123 systemd-logind[1470]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:57:57.055466 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:48016.service - OpenSSH per-connection server daemon (10.0.0.1:48016). Sep 8 23:57:57.056820 systemd-logind[1470]: Removed session 11. Sep 8 23:57:57.098057 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:57.099869 sshd-session[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:57.104676 systemd-logind[1470]: New session 12 of user core. Sep 8 23:57:57.121291 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:57:57.292917 sshd[3646]: Connection closed by 10.0.0.1 port 48016 Sep 8 23:57:57.293537 sshd-session[3643]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:57.304070 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:48016.service: Deactivated successfully. Sep 8 23:57:57.305727 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:57:57.306519 systemd-logind[1470]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:57:57.313340 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:48030.service - OpenSSH per-connection server daemon (10.0.0.1:48030). 
Sep 8 23:57:57.314281 systemd-logind[1470]: Removed session 12. Sep 8 23:57:57.353564 sshd[3656]: Accepted publickey for core from 10.0.0.1 port 48030 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:57.354848 sshd-session[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:57.359169 systemd-logind[1470]: New session 13 of user core. Sep 8 23:57:57.370247 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:57:58.479071 sshd[3659]: Connection closed by 10.0.0.1 port 48030 Sep 8 23:57:58.480739 sshd-session[3656]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:58.493762 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:48030.service: Deactivated successfully. Sep 8 23:57:58.498444 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:57:58.500111 systemd-logind[1470]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:57:58.515986 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:48034.service - OpenSSH per-connection server daemon (10.0.0.1:48034). Sep 8 23:57:58.519681 systemd-logind[1470]: Removed session 13. Sep 8 23:57:58.559736 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:58.561353 sshd-session[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:58.565924 systemd-logind[1470]: New session 14 of user core. Sep 8 23:57:58.576307 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:57:58.794746 sshd[3680]: Connection closed by 10.0.0.1 port 48034 Sep 8 23:57:58.795830 sshd-session[3677]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:58.810079 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:48038.service - OpenSSH per-connection server daemon (10.0.0.1:48038). Sep 8 23:57:58.811315 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:48034.service: Deactivated successfully. Sep 8 23:57:58.814174 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:57:58.816674 systemd-logind[1470]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:57:58.817776 systemd-logind[1470]: Removed session 14. Sep 8 23:57:58.856685 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:57:58.858102 sshd-session[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:57:58.865467 systemd-logind[1470]: New session 15 of user core. Sep 8 23:57:58.874326 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:57:58.994149 sshd[3694]: Connection closed by 10.0.0.1 port 48038 Sep 8 23:57:58.995385 sshd-session[3689]: pam_unix(sshd:session): session closed for user core Sep 8 23:57:58.998107 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:48038.service: Deactivated successfully. Sep 8 23:57:59.000379 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:57:59.001941 systemd-logind[1470]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:57:59.002873 systemd-logind[1470]: Removed session 15. Sep 8 23:58:04.006483 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:46826.service - OpenSSH per-connection server daemon (10.0.0.1:46826). 
Sep 8 23:58:04.065129 sshd[3731]: Accepted publickey for core from 10.0.0.1 port 46826 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:58:04.066537 sshd-session[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:04.070378 systemd-logind[1470]: New session 16 of user core. Sep 8 23:58:04.083283 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:58:04.200130 sshd[3733]: Connection closed by 10.0.0.1 port 46826 Sep 8 23:58:04.201519 sshd-session[3731]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:04.205803 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:46826.service: Deactivated successfully. Sep 8 23:58:04.207576 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:58:04.211242 systemd-logind[1470]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:58:04.212695 systemd-logind[1470]: Removed session 16. Sep 8 23:58:09.213958 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:46834.service - OpenSSH per-connection server daemon (10.0.0.1:46834). Sep 8 23:58:09.256907 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 46834 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:58:09.258344 sshd-session[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:09.263290 systemd-logind[1470]: New session 17 of user core. Sep 8 23:58:09.271313 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:58:09.385114 sshd[3775]: Connection closed by 10.0.0.1 port 46834 Sep 8 23:58:09.385483 sshd-session[3767]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:09.388316 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:46834.service: Deactivated successfully. Sep 8 23:58:09.389936 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:58:09.391384 systemd-logind[1470]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:58:09.392378 systemd-logind[1470]: Removed session 17. Sep 8 23:58:14.398793 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:39888.service - OpenSSH per-connection server daemon (10.0.0.1:39888). Sep 8 23:58:14.443958 sshd[3829]: Accepted publickey for core from 10.0.0.1 port 39888 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:58:14.445431 sshd-session[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:58:14.449962 systemd-logind[1470]: New session 18 of user core. Sep 8 23:58:14.456328 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:58:14.571946 sshd[3831]: Connection closed by 10.0.0.1 port 39888 Sep 8 23:58:14.572348 sshd-session[3829]: pam_unix(sshd:session): session closed for user core Sep 8 23:58:14.576713 systemd-logind[1470]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:58:14.576935 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:39888.service: Deactivated successfully. Sep 8 23:58:14.578786 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:58:14.579682 systemd-logind[1470]: Removed session 18.