Sep 8 23:53:58.853242 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 8 23:53:58.853265 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Sep 8 22:15:05 -00 2025 Sep 8 23:53:58.853274 kernel: KASLR enabled Sep 8 23:53:58.853280 kernel: efi: EFI v2.7 by EDK II Sep 8 23:53:58.853286 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Sep 8 23:53:58.853291 kernel: random: crng init done Sep 8 23:53:58.853298 kernel: secureboot: Secure boot disabled Sep 8 23:53:58.853304 kernel: ACPI: Early table checksum verification disabled Sep 8 23:53:58.853310 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Sep 8 23:53:58.853317 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 8 23:53:58.853324 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853329 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853335 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853347 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853356 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853364 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853371 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853377 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853384 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 8 23:53:58.853390 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 8 23:53:58.853396 kernel: NUMA: Failed to initialise from firmware Sep 8 23:53:58.853402 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:53:58.853408 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 8 23:53:58.853414 kernel: Zone ranges: Sep 8 23:53:58.853420 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:53:58.853428 kernel: DMA32 empty Sep 8 23:53:58.853434 kernel: Normal empty Sep 8 23:53:58.853440 kernel: Movable zone start for each node Sep 8 23:53:58.853446 kernel: Early memory node ranges Sep 8 23:53:58.853452 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Sep 8 23:53:58.853458 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Sep 8 23:53:58.853464 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Sep 8 23:53:58.853470 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 8 23:53:58.853476 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 8 23:53:58.853482 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 8 23:53:58.853488 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 8 23:53:58.853494 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 8 23:53:58.853502 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 8 23:53:58.853509 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 8 23:53:58.853515 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 8 23:53:58.853527 kernel: psci: probing for conduit method from ACPI. 
Sep 8 23:53:58.853533 kernel: psci: PSCIv1.1 detected in firmware. Sep 8 23:53:58.853540 kernel: psci: Using standard PSCI v0.2 function IDs Sep 8 23:53:58.853548 kernel: psci: Trusted OS migration not required Sep 8 23:53:58.853554 kernel: psci: SMC Calling Convention v1.1 Sep 8 23:53:58.853561 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 8 23:53:58.853567 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 8 23:53:58.853574 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 8 23:53:58.853581 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 8 23:53:58.853587 kernel: Detected PIPT I-cache on CPU0 Sep 8 23:53:58.853594 kernel: CPU features: detected: GIC system register CPU interface Sep 8 23:53:58.853600 kernel: CPU features: detected: Hardware dirty bit management Sep 8 23:53:58.853607 kernel: CPU features: detected: Spectre-v4 Sep 8 23:53:58.853620 kernel: CPU features: detected: Spectre-BHB Sep 8 23:53:58.853628 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 8 23:53:58.853635 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 8 23:53:58.853641 kernel: CPU features: detected: ARM erratum 1418040 Sep 8 23:53:58.853648 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 8 23:53:58.853654 kernel: alternatives: applying boot alternatives Sep 8 23:53:58.853662 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007 Sep 8 23:53:58.853669 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 8 23:53:58.853675 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 8 23:53:58.853682 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 8 23:53:58.853688 kernel: Fallback order for Node 0: 0 Sep 8 23:53:58.853697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 8 23:53:58.853703 kernel: Policy zone: DMA Sep 8 23:53:58.853710 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 8 23:53:58.853716 kernel: software IO TLB: area num 4. Sep 8 23:53:58.853723 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 8 23:53:58.853730 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved) Sep 8 23:53:58.853736 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 8 23:53:58.853743 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 8 23:53:58.853750 kernel: rcu: RCU event tracing is enabled. Sep 8 23:53:58.853757 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 8 23:53:58.853764 kernel: Trampoline variant of Tasks RCU enabled. Sep 8 23:53:58.853770 kernel: Tracing variant of Tasks RCU enabled. Sep 8 23:53:58.853779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 8 23:53:58.853785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 8 23:53:58.853792 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 8 23:53:58.853798 kernel: GICv3: 256 SPIs implemented Sep 8 23:53:58.853804 kernel: GICv3: 0 Extended SPIs implemented Sep 8 23:53:58.853811 kernel: Root IRQ handler: gic_handle_irq Sep 8 23:53:58.853817 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 8 23:53:58.853824 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 8 23:53:58.853830 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 8 23:53:58.853861 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 8 23:53:58.853878 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 8 23:53:58.853886 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 8 23:53:58.853893 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 8 23:53:58.853899 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 8 23:53:58.853906 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:53:58.853912 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 8 23:53:58.853919 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 8 23:53:58.853926 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 8 23:53:58.853932 kernel: arm-pv: using stolen time PV Sep 8 23:53:58.853939 kernel: Console: colour dummy device 80x25 Sep 8 23:53:58.853946 kernel: ACPI: Core revision 20230628 Sep 8 23:53:58.853953 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 8 23:53:58.853961 kernel: pid_max: default: 32768 minimum: 301 Sep 8 23:53:58.853968 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 8 23:53:58.853975 kernel: landlock: Up and running. Sep 8 23:53:58.853981 kernel: SELinux: Initializing. Sep 8 23:53:58.853988 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:53:58.853994 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 8 23:53:58.854001 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:53:58.854008 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 8 23:53:58.854014 kernel: rcu: Hierarchical SRCU implementation. Sep 8 23:53:58.854023 kernel: rcu: Max phase no-delay instances is 400. Sep 8 23:53:58.854029 kernel: Platform MSI: ITS@0x8080000 domain created Sep 8 23:53:58.854036 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 8 23:53:58.854043 kernel: Remapping and enabling EFI services. Sep 8 23:53:58.854049 kernel: smp: Bringing up secondary CPUs ... 
Sep 8 23:53:58.854056 kernel: Detected PIPT I-cache on CPU1 Sep 8 23:53:58.854063 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 8 23:53:58.854069 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 8 23:53:58.854076 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:53:58.854084 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 8 23:53:58.854091 kernel: Detected PIPT I-cache on CPU2 Sep 8 23:53:58.854103 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 8 23:53:58.854112 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 8 23:53:58.854119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:53:58.854126 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 8 23:53:58.854133 kernel: Detected PIPT I-cache on CPU3 Sep 8 23:53:58.854148 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 8 23:53:58.854156 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 8 23:53:58.854165 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 8 23:53:58.854172 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 8 23:53:58.854179 kernel: smp: Brought up 1 node, 4 CPUs Sep 8 23:53:58.854186 kernel: SMP: Total of 4 processors activated. Sep 8 23:53:58.854193 kernel: CPU features: detected: 32-bit EL0 Support Sep 8 23:53:58.854200 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 8 23:53:58.854207 kernel: CPU features: detected: Common not Private translations Sep 8 23:53:58.854214 kernel: CPU features: detected: CRC32 instructions Sep 8 23:53:58.854220 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 8 23:53:58.854229 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 8 23:53:58.854236 kernel: CPU features: detected: LSE atomic instructions Sep 8 23:53:58.854243 kernel: CPU features: detected: Privileged Access Never Sep 8 23:53:58.854250 kernel: CPU features: detected: RAS Extension Support Sep 8 23:53:58.854257 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 8 23:53:58.854264 kernel: CPU: All CPU(s) started at EL1 Sep 8 23:53:58.854271 kernel: alternatives: applying system-wide alternatives Sep 8 23:53:58.854277 kernel: devtmpfs: initialized Sep 8 23:53:58.854285 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 8 23:53:58.854293 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 8 23:53:58.854300 kernel: pinctrl core: initialized pinctrl subsystem Sep 8 23:53:58.854307 kernel: SMBIOS 3.0.0 present. 
Sep 8 23:53:58.854314 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 8 23:53:58.854321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 8 23:53:58.854333 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 8 23:53:58.854340 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 8 23:53:58.854347 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 8 23:53:58.854356 kernel: audit: initializing netlink subsys (disabled) Sep 8 23:53:58.854363 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Sep 8 23:53:58.854370 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 8 23:53:58.854376 kernel: cpuidle: using governor menu Sep 8 23:53:58.854383 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 8 23:53:58.854390 kernel: ASID allocator initialised with 32768 entries Sep 8 23:53:58.854417 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 8 23:53:58.854425 kernel: Serial: AMBA PL011 UART driver Sep 8 23:53:58.854432 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 8 23:53:58.854438 kernel: Modules: 0 pages in range for non-PLT usage Sep 8 23:53:58.854447 kernel: Modules: 509248 pages in range for PLT usage Sep 8 23:53:58.854454 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 8 23:53:58.854461 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 8 23:53:58.854468 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 8 23:53:58.854475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 8 23:53:58.854482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 8 23:53:58.854489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 8 23:53:58.854496 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 8 23:53:58.854503 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 8 23:53:58.854511 kernel: ACPI: Added _OSI(Module Device) Sep 8 23:53:58.854519 kernel: ACPI: Added _OSI(Processor Device) Sep 8 23:53:58.854526 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 8 23:53:58.854533 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 8 23:53:58.854539 kernel: ACPI: Interpreter enabled Sep 8 23:53:58.854546 kernel: ACPI: Using GIC for interrupt routing Sep 8 23:53:58.854553 kernel: ACPI: MCFG table detected, 1 entries Sep 8 23:53:58.854560 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 8 23:53:58.854567 kernel: printk: console [ttyAMA0] enabled Sep 8 23:53:58.854576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 8 23:53:58.854727 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 8 23:53:58.854806 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 8 23:53:58.854871 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 8 23:53:58.854932 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 8 23:53:58.854995 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 8 23:53:58.855004 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 8 23:53:58.855015 kernel: PCI host bridge to bus 0000:00 Sep 8 23:53:58.855092 kernel: pci_bus 0000:00: root bus resource [mem 
0x10000000-0x3efeffff window] Sep 8 23:53:58.855170 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 8 23:53:58.855231 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 8 23:53:58.855290 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 8 23:53:58.855377 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 8 23:53:58.855454 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 8 23:53:58.855525 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 8 23:53:58.855592 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 8 23:53:58.855673 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 8 23:53:58.855742 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 8 23:53:58.855812 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 8 23:53:58.855877 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 8 23:53:58.855940 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 8 23:53:58.856002 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 8 23:53:58.856072 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 8 23:53:58.856082 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 8 23:53:58.856089 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 8 23:53:58.856097 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 8 23:53:58.856104 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 8 23:53:58.856112 kernel: iommu: Default domain type: Translated Sep 8 23:53:58.856121 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 8 23:53:58.856131 kernel: efivars: Registered efivars operations Sep 8 23:53:58.856147 kernel: vgaarb: loaded Sep 8 23:53:58.856155 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 8 23:53:58.856163 kernel: VFS: Disk quotas dquot_6.6.0 Sep 8 23:53:58.856170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 8 23:53:58.856181 kernel: pnp: PnP ACPI init Sep 8 23:53:58.856262 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 8 23:53:58.856272 kernel: pnp: PnP ACPI: found 1 devices Sep 8 23:53:58.856282 kernel: NET: Registered PF_INET protocol family Sep 8 23:53:58.856290 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 8 23:53:58.856302 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 8 23:53:58.856312 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 8 23:53:58.856320 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 8 23:53:58.856327 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 8 23:53:58.856334 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 8 23:53:58.856342 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:53:58.856349 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 8 23:53:58.856358 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 8 23:53:58.856365 kernel: PCI: CLS 0 bytes, default 64 Sep 8 23:53:58.856372 kernel: kvm [1]: HYP mode not available Sep 8 23:53:58.856379 kernel: Initialise system trusted keyrings Sep 8 23:53:58.856386 kernel: workingset: timestamp_bits=39 max_order=20 
bucket_order=0 Sep 8 23:53:58.856393 kernel: Key type asymmetric registered Sep 8 23:53:58.856400 kernel: Asymmetric key parser 'x509' registered Sep 8 23:53:58.856407 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 8 23:53:58.856414 kernel: io scheduler mq-deadline registered Sep 8 23:53:58.856423 kernel: io scheduler kyber registered Sep 8 23:53:58.856430 kernel: io scheduler bfq registered Sep 8 23:53:58.856437 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 8 23:53:58.856445 kernel: ACPI: button: Power Button [PWRB] Sep 8 23:53:58.856452 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 8 23:53:58.856525 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 8 23:53:58.856535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 8 23:53:58.856556 kernel: thunder_xcv, ver 1.0 Sep 8 23:53:58.856563 kernel: thunder_bgx, ver 1.0 Sep 8 23:53:58.856573 kernel: nicpf, ver 1.0 Sep 8 23:53:58.856581 kernel: nicvf, ver 1.0 Sep 8 23:53:58.856664 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 8 23:53:58.856730 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:53:58 UTC (1757375638) Sep 8 23:53:58.856739 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 8 23:53:58.856747 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 8 23:53:58.856754 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 8 23:53:58.856761 kernel: watchdog: Hard watchdog permanently disabled Sep 8 23:53:58.856771 kernel: NET: Registered PF_INET6 protocol family Sep 8 23:53:58.856778 kernel: Segment Routing with IPv6 Sep 8 23:53:58.856788 kernel: In-situ OAM (IOAM) with IPv6 Sep 8 23:53:58.856795 kernel: NET: Registered PF_PACKET protocol family Sep 8 23:53:58.856802 kernel: Key type dns_resolver registered Sep 8 23:53:58.856809 kernel: registered taskstats version 1 Sep 8 23:53:58.856816 kernel: Loading compiled-in X.509 certificates Sep 8 23:53:58.856823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 98feb45e0c7a714eab78dfe8a165eb91758e42e9' Sep 8 23:53:58.856831 kernel: Key type .fscrypt registered Sep 8 23:53:58.856840 kernel: Key type fscrypt-provisioning registered Sep 8 23:53:58.856847 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 8 23:53:58.856854 kernel: ima: Allocated hash algorithm: sha1 Sep 8 23:53:58.856861 kernel: ima: No architecture policies found Sep 8 23:53:58.856868 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 8 23:53:58.856875 kernel: clk: Disabling unused clocks Sep 8 23:53:58.856882 kernel: Freeing unused kernel memory: 38400K Sep 8 23:53:58.856889 kernel: Run /init as init process Sep 8 23:53:58.856896 kernel: with arguments: Sep 8 23:53:58.856904 kernel: /init Sep 8 23:53:58.856911 kernel: with environment: Sep 8 23:53:58.856918 kernel: HOME=/ Sep 8 23:53:58.856925 kernel: TERM=linux Sep 8 23:53:58.856932 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 8 23:53:58.856940 systemd[1]: Successfully made /usr/ read-only. Sep 8 23:53:58.856950 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:53:58.856959 systemd[1]: Detected virtualization kvm. 
Sep 8 23:53:58.856967 systemd[1]: Detected architecture arm64. Sep 8 23:53:58.856974 systemd[1]: Running in initrd. Sep 8 23:53:58.856981 systemd[1]: No hostname configured, using default hostname. Sep 8 23:53:58.856989 systemd[1]: Hostname set to . Sep 8 23:53:58.856997 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:53:58.857004 systemd[1]: Queued start job for default target initrd.target. Sep 8 23:53:58.857012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:53:58.857019 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:53:58.857029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 8 23:53:58.857037 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:53:58.857045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 8 23:53:58.857054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 8 23:53:58.857062 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 8 23:53:58.857070 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 8 23:53:58.857079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:53:58.857087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:53:58.857095 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:53:58.857102 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:53:58.857110 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:53:58.857118 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:53:58.857125 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:53:58.857133 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:53:58.857158 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 8 23:53:58.857168 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 8 23:53:58.857176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:53:58.857184 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:53:58.857192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:53:58.857199 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:53:58.857207 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 8 23:53:58.857215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:53:58.857223 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 8 23:53:58.857232 systemd[1]: Starting systemd-fsck-usr.service... Sep 8 23:53:58.857240 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:53:58.857248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:53:58.857255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:53:58.857263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 8 23:53:58.857271 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 8 23:53:58.857285 systemd[1]: Finished systemd-fsck-usr.service. Sep 8 23:53:58.857315 systemd-journald[239]: Collecting audit messages is disabled. Sep 8 23:53:58.857334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 8 23:53:58.857344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:53:58.857352 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 8 23:53:58.857360 systemd-journald[239]: Journal started Sep 8 23:53:58.857378 systemd-journald[239]: Runtime Journal (/run/log/journal/25019466177b490ba818e9cb88e06b8f) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:53:58.845432 systemd-modules-load[240]: Inserted module 'overlay' Sep 8 23:53:58.858947 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:53:58.860053 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 8 23:53:58.862266 kernel: Bridge firewalling registered Sep 8 23:53:58.860241 systemd-modules-load[240]: Inserted module 'br_netfilter' Sep 8 23:53:58.861334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:53:58.866299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:53:58.867748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:53:58.869962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:53:58.872661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:53:58.880326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:53:58.882232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:53:58.883397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:53:58.886919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:53:58.903328 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:53:58.905295 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 8 23:53:58.918976 dracut-cmdline[279]: dracut-dracut-053 Sep 8 23:53:58.921562 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007 Sep 8 23:53:58.930014 systemd-resolved[273]: Positive Trust Anchors: Sep 8 23:53:58.930033 systemd-resolved[273]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:53:58.930064 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:53:58.935043 systemd-resolved[273]: Defaulting to hostname 'linux'. Sep 8 23:53:58.936172 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:53:58.937980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:53:58.991171 kernel: SCSI subsystem initialized Sep 8 23:53:58.995160 kernel: Loading iSCSI transport class v2.0-870. Sep 8 23:53:59.004192 kernel: iscsi: registered transport (tcp) Sep 8 23:53:59.016178 kernel: iscsi: registered transport (qla4xxx) Sep 8 23:53:59.016216 kernel: QLogic iSCSI HBA Driver Sep 8 23:53:59.060170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 8 23:53:59.072302 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 8 23:53:59.088167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 8 23:53:59.088235 kernel: device-mapper: uevent: version 1.0.3 Sep 8 23:53:59.089481 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 8 23:53:59.136182 kernel: raid6: neonx8 gen() 15667 MB/s Sep 8 23:53:59.153158 kernel: raid6: neonx4 gen() 15656 MB/s Sep 8 23:53:59.170178 kernel: raid6: neonx2 gen() 13108 MB/s Sep 8 23:53:59.187171 kernel: raid6: neonx1 gen() 10365 MB/s Sep 8 23:53:59.204168 kernel: raid6: int64x8 gen() 6728 MB/s Sep 8 23:53:59.221161 kernel: raid6: int64x4 gen() 7245 MB/s Sep 8 23:53:59.238161 kernel: raid6: int64x2 gen() 6065 MB/s Sep 8 23:53:59.255158 kernel: raid6: int64x1 gen() 5005 MB/s Sep 8 23:53:59.255183 kernel: raid6: using algorithm neonx8 gen() 15667 MB/s Sep 8 23:53:59.272164 kernel: raid6: .... xor() 11854 MB/s, rmw enabled Sep 8 23:53:59.272219 kernel: raid6: using neon recovery algorithm Sep 8 23:53:59.277432 kernel: xor: measuring software checksum speed Sep 8 23:53:59.277455 kernel: 8regs : 21596 MB/sec Sep 8 23:53:59.278616 kernel: 32regs : 21681 MB/sec Sep 8 23:53:59.278630 kernel: arm64_neon : 27974 MB/sec Sep 8 23:53:59.278639 kernel: xor: using function: arm64_neon (27974 MB/sec) Sep 8 23:53:59.329180 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 8 23:53:59.342183 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:53:59.358335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:53:59.373296 systemd-udevd[461]: Using default interface naming scheme 'v255'. Sep 8 23:53:59.377120 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:53:59.384505 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 8 23:53:59.396281 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 8 23:53:59.427465 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 8 23:53:59.450353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:53:59.498456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:53:59.514428 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 8 23:53:59.526992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 8 23:53:59.528979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:53:59.530835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:53:59.532592 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:53:59.541446 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 8 23:53:59.550674 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:53:59.563158 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 8 23:53:59.566172 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 8 23:53:59.569457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:53:59.569584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:53:59.577249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 8 23:53:59.577271 kernel: GPT:9289727 != 19775487 Sep 8 23:53:59.577288 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 8 23:53:59.577298 kernel: GPT:9289727 != 19775487 Sep 8 23:53:59.577309 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 8 23:53:59.577318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:53:59.576281 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:53:59.577338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:53:59.577702 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:53:59.580027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:53:59.590390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:53:59.597157 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (507) Sep 8 23:53:59.597199 kernel: BTRFS: device fsid 75950a77-34ea-4c25-8b07-0ac9de89ed80 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (519) Sep 8 23:53:59.610018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:53:59.617854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:53:59.624152 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 8 23:53:59.625188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 8 23:53:59.638441 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 8 23:53:59.645654 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 8 23:53:59.664318 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 8 23:53:59.665907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 8 23:53:59.669777 disk-uuid[554]: Primary Header is updated. 
Sep 8 23:53:59.669777 disk-uuid[554]: Secondary Entries is updated. Sep 8 23:53:59.669777 disk-uuid[554]: Secondary Header is updated. Sep 8 23:53:59.674168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:53:59.688622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:00.685304 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 8 23:54:00.686054 disk-uuid[555]: The operation has completed successfully. Sep 8 23:54:00.720093 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 8 23:54:00.720201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 8 23:54:00.755295 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 8 23:54:00.758764 sh[574]: Success Sep 8 23:54:00.772168 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 8 23:54:00.805734 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 8 23:54:00.818650 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 8 23:54:00.821624 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 8 23:54:00.831349 kernel: BTRFS info (device dm-0): first mount of filesystem 75950a77-34ea-4c25-8b07-0ac9de89ed80 Sep 8 23:54:00.831387 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:54:00.831397 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 8 23:54:00.833395 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 8 23:54:00.833437 kernel: BTRFS info (device dm-0): using free space tree Sep 8 23:54:00.837019 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 8 23:54:00.838373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 8 23:54:00.846398 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 8 23:54:00.848163 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 8 23:54:00.863124 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:54:00.863188 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:54:00.863200 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:00.869275 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:00.874365 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:54:00.879744 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 8 23:54:00.886348 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 8 23:54:00.949549 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:54:00.959353 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 8 23:54:00.968823 ignition[664]: Ignition 2.20.0 Sep 8 23:54:00.968838 ignition[664]: Stage: fetch-offline Sep 8 23:54:00.968886 ignition[664]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:00.968895 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:00.969048 ignition[664]: parsed url from cmdline: "" Sep 8 23:54:00.969053 ignition[664]: no config URL provided Sep 8 23:54:00.969058 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Sep 8 23:54:00.969065 ignition[664]: no config at "/usr/lib/ignition/user.ign" Sep 8 23:54:00.969087 ignition[664]: op(1): [started] loading QEMU firmware config module Sep 8 23:54:00.969091 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 8 23:54:00.974400 ignition[664]: op(1): [finished] loading QEMU firmware config module Sep 8 23:54:00.986759 systemd-networkd[759]: lo: Link UP Sep 8 23:54:00.986771 systemd-networkd[759]: lo: Gained carrier Sep 8 23:54:00.987589 systemd-networkd[759]: Enumeration completed Sep 8 23:54:00.987772 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:54:00.989444 systemd[1]: Reached target network.target - Network. Sep 8 23:54:00.991276 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:00.991281 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:54:00.991851 systemd-networkd[759]: eth0: Link UP Sep 8 23:54:00.991854 systemd-networkd[759]: eth0: Gained carrier Sep 8 23:54:00.991861 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:01.006433 ignition[664]: parsing config with SHA512: 9adf6cc580b0fa4103d59225a2fad611284821906b907e27a804a7dd4951133f598d6a1293841b461d3b18ca65d9adeef3169f86d035693c6f34b861b7b36aa2 Sep 8 23:54:01.010967 unknown[664]: fetched base config from "system" Sep 8 23:54:01.010976 unknown[664]: fetched user config from "qemu" Sep 8 23:54:01.011618 ignition[664]: fetch-offline: fetch-offline passed Sep 8 23:54:01.011697 ignition[664]: Ignition finished successfully Sep 8 23:54:01.013057 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:54:01.014945 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 8 23:54:01.015196 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:54:01.021378 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 8 23:54:01.033357 ignition[766]: Ignition 2.20.0 Sep 8 23:54:01.033366 ignition[766]: Stage: kargs Sep 8 23:54:01.033524 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:01.033533 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:01.034398 ignition[766]: kargs: kargs passed Sep 8 23:54:01.034442 ignition[766]: Ignition finished successfully Sep 8 23:54:01.037920 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 8 23:54:01.046377 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 8 23:54:01.055631 ignition[777]: Ignition 2.20.0 Sep 8 23:54:01.055642 ignition[777]: Stage: disks Sep 8 23:54:01.055794 ignition[777]: no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:01.055803 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:01.056652 ignition[777]: disks: disks passed Sep 8 23:54:01.056697 ignition[777]: Ignition finished successfully Sep 8 23:54:01.061186 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 8 23:54:01.062863 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 8 23:54:01.063792 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 8 23:54:01.066055 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:54:01.068183 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:54:01.070362 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:54:01.078826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 8 23:54:01.088370 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 8 23:54:01.096319 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 8 23:54:01.104290 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 8 23:54:01.144164 kernel: EXT4-fs (vda9): mounted filesystem 3b93848a-00fd-42cd-b996-7bf357d8ae77 r/w with ordered data mode. Quota mode: none. Sep 8 23:54:01.144192 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 8 23:54:01.145317 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 8 23:54:01.156235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:54:01.157882 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 8 23:54:01.159183 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 8 23:54:01.159229 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 8 23:54:01.165005 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (796) Sep 8 23:54:01.159254 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:54:01.168859 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:54:01.168878 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:54:01.168888 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:01.163960 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 8 23:54:01.166824 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 8 23:54:01.173244 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:01.174022 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:54:01.202028 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Sep 8 23:54:01.205920 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Sep 8 23:54:01.210006 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Sep 8 23:54:01.214405 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Sep 8 23:54:01.294235 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 8 23:54:01.304244 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 8 23:54:01.306589 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 8 23:54:01.311170 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:54:01.327224 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 8 23:54:01.331514 ignition[911]: INFO : Ignition 2.20.0 Sep 8 23:54:01.331514 ignition[911]: INFO : Stage: mount Sep 8 23:54:01.331514 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:01.331514 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:01.331514 ignition[911]: INFO : mount: mount passed Sep 8 23:54:01.331514 ignition[911]: INFO : Ignition finished successfully Sep 8 23:54:01.332819 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 8 23:54:01.344280 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 8 23:54:01.962105 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 8 23:54:01.975341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 8 23:54:01.981596 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (925) Sep 8 23:54:01.981622 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a Sep 8 23:54:01.982577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 8 23:54:01.982590 kernel: BTRFS info (device vda6): using free space tree Sep 8 23:54:01.985158 kernel: BTRFS info (device vda6): auto enabling async discard Sep 8 23:54:01.986051 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 8 23:54:02.000938 ignition[942]: INFO : Ignition 2.20.0 Sep 8 23:54:02.000938 ignition[942]: INFO : Stage: files Sep 8 23:54:02.002289 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:02.002289 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:02.002289 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Sep 8 23:54:02.005493 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 8 23:54:02.005493 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 8 23:54:02.005493 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 8 23:54:02.005493 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 8 23:54:02.005493 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 8 23:54:02.004988 unknown[942]: wrote ssh authorized keys file for user: core Sep 8 23:54:02.012045 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 8 23:54:02.012045 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Sep 8 23:54:02.078189 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 8 23:54:02.477120 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 8 23:54:02.478680 
ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 8 23:54:02.478680 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:54:02.488064 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 8 23:54:02.766524 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 8 23:54:03.016288 systemd-networkd[759]: eth0: Gained IPv6LL Sep 8 23:54:03.212568 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 8 23:54:03.212568 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 8 23:54:03.215860 ignition[942]: INFO 
: files: op(d): [finished] processing unit "coreos-metadata.service" Sep 8 23:54:03.215860 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 8 23:54:03.228721 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:54:03.231973 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 8 23:54:03.234453 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 8 23:54:03.234453 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 8 23:54:03.234453 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 8 23:54:03.234453 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:54:03.234453 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 8 23:54:03.234453 ignition[942]: INFO : files: files passed Sep 8 23:54:03.234453 ignition[942]: INFO : Ignition finished successfully Sep 8 23:54:03.235050 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 8 23:54:03.241284 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 8 23:54:03.243170 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 8 23:54:03.245539 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 8 23:54:03.245660 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 8 23:54:03.249633 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Sep 8 23:54:03.251692 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:03.251692 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:03.254260 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 8 23:54:03.253779 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:54:03.255270 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 8 23:54:03.262526 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 8 23:54:03.280409 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 8 23:54:03.280539 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 8 23:54:03.282306 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 8 23:54:03.283876 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 8 23:54:03.285276 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 8 23:54:03.286068 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 8 23:54:03.301278 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:54:03.311311 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 8 23:54:03.319150 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Sep 8 23:54:03.320081 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:54:03.321743 systemd[1]: Stopped target timers.target - Timer Units. Sep 8 23:54:03.323075 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 8 23:54:03.323208 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 8 23:54:03.325132 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 8 23:54:03.326787 systemd[1]: Stopped target basic.target - Basic System. Sep 8 23:54:03.328034 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 8 23:54:03.329378 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 8 23:54:03.330915 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 8 23:54:03.332773 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 8 23:54:03.334212 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 8 23:54:03.335801 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 8 23:54:03.337320 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 8 23:54:03.338713 systemd[1]: Stopped target swap.target - Swaps. Sep 8 23:54:03.339838 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 8 23:54:03.339968 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 8 23:54:03.341755 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:54:03.343225 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:54:03.344792 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 8 23:54:03.348226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:54:03.349238 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 8 23:54:03.349356 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 8 23:54:03.351787 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 8 23:54:03.351899 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 8 23:54:03.353543 systemd[1]: Stopped target paths.target - Path Units. Sep 8 23:54:03.354890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 8 23:54:03.354989 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:54:03.356589 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:54:03.357792 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:54:03.359104 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:54:03.359201 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:54:03.360993 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:54:03.361071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:54:03.362286 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:54:03.362388 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:54:03.364206 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:54:03.364303 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:54:03.375305 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 8 23:54:03.375998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:54:03.376120 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:54:03.378463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:54:03.379789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:54:03.379904 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:54:03.381353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:54:03.381439 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:54:03.386689 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:54:03.386779 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:54:03.390510 ignition[997]: INFO : Ignition 2.20.0 Sep 8 23:54:03.390510 ignition[997]: INFO : Stage: umount Sep 8 23:54:03.390510 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:54:03.390510 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:54:03.390510 ignition[997]: INFO : umount: umount passed Sep 8 23:54:03.390510 ignition[997]: INFO : Ignition finished successfully Sep 8 23:54:03.390550 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:54:03.390657 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:54:03.392833 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:54:03.393127 systemd[1]: Stopped target network.target - Network. Sep 8 23:54:03.393822 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:54:03.393873 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:54:03.396779 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:54:03.396827 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:54:03.397924 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:54:03.397962 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:54:03.399540 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:54:03.399590 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:54:03.400592 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:54:03.402020 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 8 23:54:03.407619 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:54:03.407715 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:54:03.411034 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:54:03.411273 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:54:03.411376 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:54:03.413540 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:54:03.414130 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:54:03.414200 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:54:03.426281 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:54:03.426984 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:54:03.427038 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 8 23:54:03.428713 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:54:03.428754 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:54:03.431238 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:54:03.431281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:54:03.432753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:54:03.432789 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:54:03.435529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:54:03.438247 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:54:03.438305 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:54:03.445823 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:54:03.445932 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:54:03.453766 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:54:03.453919 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:54:03.455760 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:54:03.455839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:54:03.457491 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:54:03.457555 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:54:03.459084 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:54:03.459123 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:54:03.460616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:54:03.460665 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:54:03.462809 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:54:03.462857 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:54:03.464979 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:54:03.465026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:54:03.467244 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:54:03.467288 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:54:03.480344 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:54:03.481160 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 8 23:54:03.481222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:54:03.483772 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:54:03.483817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:03.486938 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 8 23:54:03.486990 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:54:03.488076 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:54:03.488190 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 8 23:54:03.490221 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:54:03.491729 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:54:03.501003 systemd[1]: Switching root. Sep 8 23:54:03.528106 systemd-journald[239]: Journal stopped Sep 8 23:54:04.308678 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Sep 8 23:54:04.308731 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:54:04.308750 kernel: SELinux: policy capability open_perms=1 Sep 8 23:54:04.308759 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:54:04.308768 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:54:04.308777 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:54:04.308787 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:54:04.308796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:54:04.308809 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:54:04.308818 kernel: audit: type=1403 audit(1757375643.708:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:54:04.308829 systemd[1]: Successfully loaded SELinux policy in 30.289ms. Sep 8 23:54:04.308844 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.315ms. Sep 8 23:54:04.308856 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:54:04.308867 systemd[1]: Detected virtualization kvm. Sep 8 23:54:04.308877 systemd[1]: Detected architecture arm64. Sep 8 23:54:04.308905 systemd[1]: Detected first boot. Sep 8 23:54:04.308916 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:54:04.308927 zram_generator::config[1043]: No configuration found. Sep 8 23:54:04.308938 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:54:04.308952 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:54:04.308964 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:54:04.308974 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:54:04.308985 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:54:04.308995 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:54:04.309006 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 8 23:54:04.309016 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:54:04.309026 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:54:04.309036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:54:04.309048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:54:04.309059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:54:04.309069 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:54:04.309079 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:54:04.309089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 8 23:54:04.309100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:54:04.309112 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:54:04.309123 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:54:04.309136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:54:04.309170 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:54:04.309181 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 8 23:54:04.309192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:54:04.309203 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:54:04.309213 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:54:04.309224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:54:04.309234 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:54:04.309246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:54:04.309259 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:54:04.309270 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:54:04.309280 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:54:04.309290 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:54:04.309301 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:54:04.309311 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:54:04.309321 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:54:04.309331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:54:04.309342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:54:04.309352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:54:04.309362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:54:04.309373 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:54:04.309385 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:54:04.309395 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:54:04.309405 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 8 23:54:04.309416 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:54:04.309427 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:54:04.309454 systemd[1]: Reached target machines.target - Containers. Sep 8 23:54:04.309466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:54:04.309476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:04.309486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:54:04.309496 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 8 23:54:04.309507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:04.309517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:54:04.309528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:04.309540 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:54:04.309551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:04.309561 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:54:04.309572 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:54:04.309591 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:54:04.309602 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:54:04.309612 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:54:04.309623 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:04.309634 kernel: fuse: init (API version 7.39) Sep 8 23:54:04.309646 kernel: loop: module loaded Sep 8 23:54:04.309655 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:54:04.309667 kernel: ACPI: bus type drm_connector registered Sep 8 23:54:04.309677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:54:04.309688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:54:04.309699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:54:04.309710 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:54:04.309721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:54:04.309731 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:54:04.309743 systemd[1]: Stopped verity-setup.service. Sep 8 23:54:04.309772 systemd-journald[1115]: Collecting audit messages is disabled. Sep 8 23:54:04.309795 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:54:04.309807 systemd-journald[1115]: Journal started Sep 8 23:54:04.309833 systemd-journald[1115]: Runtime Journal (/run/log/journal/25019466177b490ba818e9cb88e06b8f) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:54:04.309868 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:54:04.117736 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:54:04.131120 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:54:04.131538 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 8 23:54:04.313154 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:54:04.313690 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:54:04.314695 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:54:04.315687 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:54:04.316659 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:54:04.319170 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Sep 8 23:54:04.320319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:54:04.321567 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:54:04.321751 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:54:04.322913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:04.323079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:04.324300 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:54:04.324456 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:54:04.325759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:04.325924 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:04.328493 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:54:04.328662 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:54:04.329805 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:04.329970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:04.331274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:54:04.332437 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:54:04.333896 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:54:04.335250 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:54:04.348080 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:54:04.355270 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:54:04.357178 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:54:04.358021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:54:04.358056 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:54:04.359911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:54:04.362017 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:54:04.364067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:54:04.365066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:04.366445 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:54:04.368169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:54:04.369121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:54:04.373308 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:54:04.374643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:54:04.375467 systemd-journald[1115]: Time spent on flushing to /var/log/journal/25019466177b490ba818e9cb88e06b8f is 25.489ms for 864 entries. 
Sep 8 23:54:04.375467 systemd-journald[1115]: System Journal (/var/log/journal/25019466177b490ba818e9cb88e06b8f) is 8M, max 195.6M, 187.6M free. Sep 8 23:54:04.410317 systemd-journald[1115]: Received client request to flush runtime journal. Sep 8 23:54:04.410370 kernel: loop0: detected capacity change from 0 to 113512 Sep 8 23:54:04.376027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:54:04.381134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:54:04.384456 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:54:04.389324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:54:04.390714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:54:04.391766 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:54:04.395205 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:54:04.396445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:54:04.408364 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:54:04.416779 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:54:04.414172 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:54:04.426466 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:54:04.433777 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:54:04.438873 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:54:04.442496 kernel: loop1: detected capacity change from 0 to 207008 Sep 8 23:54:04.446382 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:54:04.448192 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:54:04.450285 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:54:04.461437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:54:04.463273 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 8 23:54:04.474163 kernel: loop2: detected capacity change from 0 to 123192 Sep 8 23:54:04.478749 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 8 23:54:04.478783 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Sep 8 23:54:04.487206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:54:04.530167 kernel: loop3: detected capacity change from 0 to 113512 Sep 8 23:54:04.534161 kernel: loop4: detected capacity change from 0 to 207008 Sep 8 23:54:04.541195 kernel: loop5: detected capacity change from 0 to 123192 Sep 8 23:54:04.545381 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:54:04.545879 (sd-merge)[1187]: Merged extensions into '/usr'. Sep 8 23:54:04.550337 systemd[1]: Reload requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:54:04.550352 systemd[1]: Reloading... Sep 8 23:54:04.605168 zram_generator::config[1218]: No configuration found. 
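
The journald message a few lines above reports 25.489 ms spent flushing 864 entries to /var/log/journal. A quick back-of-the-envelope computation (illustrative only) of the average per-entry flush cost implied by that figure:

    # Figures copied from the systemd-journald flush message above.
    flush_ms = 25.489
    entries = 864
    print(f"{flush_ms / entries * 1000:.1f} us per entry on average")  # ≈ 29.5 us
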
Sep 8 23:54:04.648000 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:54:04.709066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:04.765307 systemd[1]: Reloading finished in 214 ms. Sep 8 23:54:04.783412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:54:04.786174 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:54:04.808526 systemd[1]: Starting ensure-sysext.service... Sep 8 23:54:04.810268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:54:04.825644 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:54:04.825661 systemd[1]: Reloading... Sep 8 23:54:04.826852 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:54:04.827373 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:54:04.828117 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:54:04.828436 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 8 23:54:04.828563 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 8 23:54:04.831473 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:54:04.831598 systemd-tmpfiles[1250]: Skipping /boot Sep 8 23:54:04.840835 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:54:04.840963 systemd-tmpfiles[1250]: Skipping /boot Sep 8 23:54:04.876169 zram_generator::config[1279]: No configuration found. Sep 8 23:54:04.964343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:05.020823 systemd[1]: Reloading finished in 194 ms. Sep 8 23:54:05.034184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:54:05.051185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:54:05.059545 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:54:05.062082 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:54:05.064694 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:54:05.068450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:54:05.073841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:54:05.076408 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:54:05.079674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:05.083464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:05.085748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 8 23:54:05.087933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:05.089067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:05.089199 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:05.092677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:05.092855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:54:05.094890 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:05.095091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:05.096796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:05.096957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:05.099447 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:54:05.108089 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:54:05.110600 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Sep 8 23:54:05.113703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:54:05.120537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:54:05.125440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:54:05.127597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:54:05.133039 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:54:05.134169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:54:05.134303 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:54:05.137580 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:54:05.141905 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:54:05.146579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:54:05.151508 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:54:05.151608 augenrules[1368]: No rules Sep 8 23:54:05.153912 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:54:05.154171 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:54:05.157701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:54:05.157910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:54:05.161331 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:54:05.161499 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:54:05.163022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:54:05.165211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 8 23:54:05.166969 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:54:05.167136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:54:05.176191 systemd[1]: Finished ensure-sysext.service. Sep 8 23:54:05.208380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:54:05.209262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:54:05.209333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:54:05.211272 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:54:05.212242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:54:05.212480 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:54:05.215169 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:54:05.218163 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 8 23:54:05.249178 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1365) Sep 8 23:54:05.270329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:54:05.276848 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:54:05.292213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:54:05.299075 systemd-networkd[1386]: lo: Link UP Sep 8 23:54:05.299345 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:54:05.299458 systemd-networkd[1386]: lo: Gained carrier Sep 8 23:54:05.300655 systemd-networkd[1386]: Enumeration completed Sep 8 23:54:05.300934 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:54:05.301524 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:05.301684 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:54:05.302244 systemd-networkd[1386]: eth0: Link UP Sep 8 23:54:05.302372 systemd-networkd[1386]: eth0: Gained carrier Sep 8 23:54:05.302434 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:54:05.303395 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:54:05.306342 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:54:05.313582 systemd-resolved[1319]: Positive Trust Anchors: Sep 8 23:54:05.313600 systemd-resolved[1319]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:54:05.313638 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:54:05.316839 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:54:05.321060 systemd-resolved[1319]: Defaulting to hostname 'linux'. Sep 8 23:54:05.321236 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:54:05.322027 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Sep 8 23:54:05.322658 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:54:05.322710 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2025-09-08 23:54:05.489409 UTC. Sep 8 23:54:05.323919 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:54:05.325360 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:54:05.327104 systemd[1]: Reached target network.target - Network. Sep 8 23:54:05.328471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:54:05.346399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:54:05.352053 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:54:05.356075 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:54:05.368179 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:54:05.381297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:54:05.395672 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 8 23:54:05.396906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:54:05.397889 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:54:05.398803 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:54:05.399854 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:54:05.401031 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:54:05.402043 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:54:05.403114 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:54:05.404066 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:54:05.404097 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:54:05.405109 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:54:05.406606 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
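
At this point eth0 holds a DHCPv4 lease of 10.0.0.99/16 with gateway 10.0.0.1, which the log shows is also the DHCP and NTP source. A small sketch using Python's standard ipaddress module (illustrative; the values are copied from the systemd-networkd and systemd-timesyncd entries above) spells out what that /16 lease implies:

    import ipaddress

    # Lease parameters from the systemd-networkd entry above.
    iface = ipaddress.ip_interface("10.0.0.99/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True: the gateway (also the DHCP/NTP server) is on-link
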
Sep 8 23:54:05.408850 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:54:05.412164 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:54:05.413255 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:54:05.414222 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:54:05.421027 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:54:05.422377 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 8 23:54:05.424305 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:54:05.425654 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:54:05.426615 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:54:05.427343 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:54:05.428055 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:54:05.428086 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:54:05.428947 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:54:05.430758 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:54:05.431752 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:54:05.434293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:54:05.438394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:54:05.439588 jq[1424]: false Sep 8 23:54:05.440302 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:54:05.441363 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:54:05.443820 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 8 23:54:05.445793 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:54:05.448369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:54:05.454125 dbus-daemon[1423]: [system] SELinux support is enabled Sep 8 23:54:05.454418 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:54:05.455512 extend-filesystems[1425]: Found loop3 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found loop4 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found loop5 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda1 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda2 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda3 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found usr Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda4 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda6 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda7 Sep 8 23:54:05.457711 extend-filesystems[1425]: Found vda9 Sep 8 23:54:05.457711 extend-filesystems[1425]: Checking size of /dev/vda9 Sep 8 23:54:05.456982 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 8 23:54:05.457471 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:54:05.458460 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:54:05.461620 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:54:05.466422 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:54:05.469601 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:54:05.471521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:54:05.473212 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:54:05.473585 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:54:05.473716 jq[1439]: true Sep 8 23:54:05.473756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:54:05.476563 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:54:05.476781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 8 23:54:05.483152 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1355) Sep 8 23:54:05.488626 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:54:05.497530 update_engine[1438]: I20250908 23:54:05.497375 1438 main.cc:92] Flatcar Update Engine starting Sep 8 23:54:05.497849 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:54:05.497873 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:54:05.499064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:54:05.499085 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:54:05.500287 extend-filesystems[1425]: Resized partition /dev/vda9 Sep 8 23:54:05.504024 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:54:05.503697 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:54:05.507238 update_engine[1438]: I20250908 23:54:05.500506 1438 update_check_scheduler.cc:74] Next update check in 4m13s Sep 8 23:54:05.512193 jq[1446]: true Sep 8 23:54:05.512369 tar[1443]: linux-arm64/LICENSE Sep 8 23:54:05.512369 tar[1443]: linux-arm64/helm Sep 8 23:54:05.511475 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:54:05.515454 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:54:05.534163 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:54:05.547410 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:54:05.547410 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:54:05.547410 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
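
The EXT4 and extend-filesystems messages above show the root filesystem on /dev/vda9 being resized on-line from 553472 to 1864699 blocks of 4 KiB. Translating those block counts into sizes (a throwaway computation, not part of any tool):

    BLOCK = 4096  # 4 KiB blocks, per the "(4k) blocks long" message above

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    before, after = 553_472, 1_864_699
    print(f"before: {gib(before):.2f} GiB")          # ≈ 2.11 GiB
    print(f"after:  {gib(after):.2f} GiB")           # ≈ 7.11 GiB
    print(f"growth: {gib(after - before):.2f} GiB")  # ≈ 5.00 GiB
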
Sep 8 23:54:05.554225 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Sep 8 23:54:05.549786 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (Power Button) Sep 8 23:54:05.553594 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:54:05.553832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:54:05.553991 systemd-logind[1433]: New seat seat0. Sep 8 23:54:05.558835 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:54:05.581152 bash[1479]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:54:05.583212 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:54:05.584801 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 8 23:54:05.604370 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:54:05.671510 containerd[1447]: time="2025-09-08T23:54:05.671409360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:54:05.698792 containerd[1447]: time="2025-09-08T23:54:05.698742360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701245920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701286200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701305080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701473920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701489720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701542840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701554040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701763000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701778360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701790200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702463 containerd[1447]: time="2025-09-08T23:54:05.701798920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.701869640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.702063080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.702203200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.702216840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.702289520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:54:05.702752 containerd[1447]: time="2025-09-08T23:54:05.702330400Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:54:05.705882 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:54:05.707260 containerd[1447]: time="2025-09-08T23:54:05.707114600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:54:05.707260 containerd[1447]: time="2025-09-08T23:54:05.707184840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:54:05.707260 containerd[1447]: time="2025-09-08T23:54:05.707201400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:54:05.707260 containerd[1447]: time="2025-09-08T23:54:05.707218240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:54:05.707260 containerd[1447]: time="2025-09-08T23:54:05.707233560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:54:05.707555 containerd[1447]: time="2025-09-08T23:54:05.707505840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.707878520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.707995200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708017440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708031920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708045600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708058000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708070240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708083480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708096840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708109400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708121320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708133360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708178920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708595 containerd[1447]: time="2025-09-08T23:54:05.708193440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708205160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708218800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708230360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708244240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708255760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708267720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708282840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708297080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708310200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708324480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708336800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708353440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708376240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708390040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.708859 containerd[1447]: time="2025-09-08T23:54:05.708401400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:54:05.709157 containerd[1447]: time="2025-09-08T23:54:05.709118480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:54:05.709298 containerd[1447]: time="2025-09-08T23:54:05.709278560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:54:05.709355 containerd[1447]: time="2025-09-08T23:54:05.709342040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:54:05.709410 containerd[1447]: time="2025-09-08T23:54:05.709397200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:54:05.709459 containerd[1447]: time="2025-09-08T23:54:05.709445800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:54:05.709512 containerd[1447]: time="2025-09-08T23:54:05.709500160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:54:05.709574 containerd[1447]: time="2025-09-08T23:54:05.709551640Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:54:05.709626 containerd[1447]: time="2025-09-08T23:54:05.709614520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 8 23:54:05.710059 containerd[1447]: time="2025-09-08T23:54:05.710008320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:54:05.710243 containerd[1447]: time="2025-09-08T23:54:05.710225640Z" level=info msg="Connect containerd service" Sep 8 23:54:05.710993 containerd[1447]: time="2025-09-08T23:54:05.710330720Z" level=info msg="using legacy CRI server" Sep 8 23:54:05.710993 containerd[1447]: time="2025-09-08T23:54:05.710347760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:54:05.710993 containerd[1447]: time="2025-09-08T23:54:05.710603560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:54:05.711429 containerd[1447]: time="2025-09-08T23:54:05.711399200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:54:05.711691 
containerd[1447]: time="2025-09-08T23:54:05.711661600Z" level=info msg="Start subscribing containerd event" Sep 8 23:54:05.711769 containerd[1447]: time="2025-09-08T23:54:05.711757520Z" level=info msg="Start recovering state" Sep 8 23:54:05.711892 containerd[1447]: time="2025-09-08T23:54:05.711879200Z" level=info msg="Start event monitor" Sep 8 23:54:05.711961 containerd[1447]: time="2025-09-08T23:54:05.711945000Z" level=info msg="Start snapshots syncer" Sep 8 23:54:05.712014 containerd[1447]: time="2025-09-08T23:54:05.712003080Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:54:05.712057 containerd[1447]: time="2025-09-08T23:54:05.712047040Z" level=info msg="Start streaming server" Sep 8 23:54:05.712744 containerd[1447]: time="2025-09-08T23:54:05.712721400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:54:05.712853 containerd[1447]: time="2025-09-08T23:54:05.712840200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:54:05.713020 containerd[1447]: time="2025-09-08T23:54:05.713001120Z" level=info msg="containerd successfully booted in 0.044193s" Sep 8 23:54:05.713102 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:54:05.726428 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:54:05.748441 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:54:05.755277 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:54:05.757226 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:54:05.772255 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:54:05.781209 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:54:05.784824 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:54:05.787394 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 8 23:54:05.788464 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:54:05.907648 tar[1443]: linux-arm64/README.md Sep 8 23:54:05.921684 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 8 23:54:06.410966 systemd-networkd[1386]: eth0: Gained IPv6LL Sep 8 23:54:06.413315 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:54:06.414835 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:54:06.427491 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:54:06.429914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:06.432043 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 8 23:54:06.447147 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:54:06.448294 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:54:06.450219 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:54:06.453907 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:54:06.992395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:06.993882 systemd[1]: Reached target multi-user.target - Multi-User System. 
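Editor's note on the "failed to load cni during init" error above: it is expected on a first boot. The CRI plugin's config dump shows NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginMaxConfNum:1, so containerd looks for exactly one network configuration file there and none has been written yet (a CNI add-on or kubeadm normally provides it later). As an illustrative sketch only, a file such as /etc/cni/net.d/10-example.conflist (file name, bridge name, and subnet are assumptions, not taken from this host) could satisfy that check:

    {
      "cniVersion": "0.3.1",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.85.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }

Because NetworkPluginMaxConfNum is 1, only the lexically first file in /etc/cni/net.d is loaded, which is why add-ons conventionally prefix their conflist with a number.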
Sep 8 23:54:06.996468 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:54:06.996808 systemd[1]: Startup finished in 521ms (kernel) + 5.035s (initrd) + 3.319s (userspace) = 8.875s. Sep 8 23:54:07.366890 kubelet[1537]: E0908 23:54:07.366778 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:54:07.369576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:54:07.369721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:54:07.371239 systemd[1]: kubelet.service: Consumed 761ms CPU time, 258M memory peak. Sep 8 23:54:11.424256 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:54:11.425606 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:59722.service - OpenSSH per-connection server daemon (10.0.0.1:59722). Sep 8 23:54:11.487771 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 59722 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:11.489058 sshd-session[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:11.500162 systemd-logind[1433]: New session 1 of user core. Sep 8 23:54:11.501134 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:54:11.513455 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:54:11.524194 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:54:11.526601 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 8 23:54:11.533350 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:54:11.535678 systemd-logind[1433]: New session c1 of user core. Sep 8 23:54:11.646429 systemd[1554]: Queued start job for default target default.target. Sep 8 23:54:11.656142 systemd[1554]: Created slice app.slice - User Application Slice. Sep 8 23:54:11.656194 systemd[1554]: Reached target paths.target - Paths. Sep 8 23:54:11.656234 systemd[1554]: Reached target timers.target - Timers. Sep 8 23:54:11.657535 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:54:11.666918 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:54:11.666983 systemd[1554]: Reached target sockets.target - Sockets. Sep 8 23:54:11.667020 systemd[1554]: Reached target basic.target - Basic System. Sep 8 23:54:11.667049 systemd[1554]: Reached target default.target - Main User Target. Sep 8 23:54:11.667074 systemd[1554]: Startup finished in 125ms. Sep 8 23:54:11.667335 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:54:11.668829 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:54:11.729750 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:59734.service - OpenSSH per-connection server daemon (10.0.0.1:59734). 
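Editor's note on the kubelet exit above: the unit starts before anything has written /var/lib/kubelet/config.yaml, so the failure is expected at this stage and systemd is left to retry. For orientation only, a stripped-down KubeletConfiguration of the kind kubeadm later drops at that path might look like the following sketch; the field values are assumptions, chosen to match details visible elsewhere in this log (CgroupDriver "systemd", static pod path /etc/kubernetes/manifests, client CA at /etc/kubernetes/pki/ca.crt), not read from the machine:

    # /var/lib/kubelet/config.yaml (illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    rotateCertificates: true

Once such a file exists, the kubelet stops exiting with the "no such file or directory" error and proceeds to the bootstrap phase seen further down.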
Sep 8 23:54:11.772442 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 59734 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:11.773788 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:11.778787 systemd-logind[1433]: New session 2 of user core. Sep 8 23:54:11.791355 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:54:11.843567 sshd[1567]: Connection closed by 10.0.0.1 port 59734 Sep 8 23:54:11.843923 sshd-session[1565]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:11.860633 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:59734.service: Deactivated successfully. Sep 8 23:54:11.862494 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:54:11.863912 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:54:11.865446 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:59742.service - OpenSSH per-connection server daemon (10.0.0.1:59742). Sep 8 23:54:11.867520 systemd-logind[1433]: Removed session 2. Sep 8 23:54:11.908212 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 59742 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:11.909465 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:11.914700 systemd-logind[1433]: New session 3 of user core. Sep 8 23:54:11.924380 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:54:11.972630 sshd[1575]: Connection closed by 10.0.0.1 port 59742 Sep 8 23:54:11.972551 sshd-session[1572]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:11.993108 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:59742.service: Deactivated successfully. Sep 8 23:54:11.995520 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:54:11.996732 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:54:11.998013 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:59748.service - OpenSSH per-connection server daemon (10.0.0.1:59748). Sep 8 23:54:11.998781 systemd-logind[1433]: Removed session 3. Sep 8 23:54:12.040013 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 59748 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:12.043046 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:12.048354 systemd-logind[1433]: New session 4 of user core. Sep 8 23:54:12.058369 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:54:12.110353 sshd[1583]: Connection closed by 10.0.0.1 port 59748 Sep 8 23:54:12.110718 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:12.121545 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:59748.service: Deactivated successfully. Sep 8 23:54:12.123124 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:54:12.125710 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:54:12.136485 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:59750.service - OpenSSH per-connection server daemon (10.0.0.1:59750). Sep 8 23:54:12.137802 systemd-logind[1433]: Removed session 4. 
Sep 8 23:54:12.176575 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 59750 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:12.177839 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:12.181541 systemd-logind[1433]: New session 5 of user core. Sep 8 23:54:12.194318 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:54:12.251633 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:54:12.251952 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:54:12.536414 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 8 23:54:12.536502 (dockerd)[1613]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 8 23:54:12.743947 dockerd[1613]: time="2025-09-08T23:54:12.743833916Z" level=info msg="Starting up" Sep 8 23:54:13.004628 dockerd[1613]: time="2025-09-08T23:54:13.004491111Z" level=info msg="Loading containers: start." Sep 8 23:54:13.235202 kernel: Initializing XFRM netlink socket Sep 8 23:54:13.292553 systemd-networkd[1386]: docker0: Link UP Sep 8 23:54:13.321373 dockerd[1613]: time="2025-09-08T23:54:13.321322170Z" level=info msg="Loading containers: done." Sep 8 23:54:13.333706 dockerd[1613]: time="2025-09-08T23:54:13.333652039Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 8 23:54:13.333856 dockerd[1613]: time="2025-09-08T23:54:13.333747777Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 8 23:54:13.333820 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck934584185-merged.mount: Deactivated successfully. Sep 8 23:54:13.333981 dockerd[1613]: time="2025-09-08T23:54:13.333938849Z" level=info msg="Daemon has completed initialization" Sep 8 23:54:13.360676 dockerd[1613]: time="2025-09-08T23:54:13.360618998Z" level=info msg="API listen on /run/docker.sock" Sep 8 23:54:13.360914 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 8 23:54:13.898328 containerd[1447]: time="2025-09-08T23:54:13.898289141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 8 23:54:14.463639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079572425.mount: Deactivated successfully. 
Sep 8 23:54:15.261049 containerd[1447]: time="2025-09-08T23:54:15.259776482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:15.261049 containerd[1447]: time="2025-09-08T23:54:15.260310972Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359" Sep 8 23:54:15.261579 containerd[1447]: time="2025-09-08T23:54:15.261547477Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:15.264627 containerd[1447]: time="2025-09-08T23:54:15.264588826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:15.266434 containerd[1447]: time="2025-09-08T23:54:15.266399991Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.368070338s" Sep 8 23:54:15.266545 containerd[1447]: time="2025-09-08T23:54:15.266527465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Sep 8 23:54:15.267300 containerd[1447]: time="2025-09-08T23:54:15.267269770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 8 23:54:16.220812 containerd[1447]: time="2025-09-08T23:54:16.220765548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:16.221908 containerd[1447]: time="2025-09-08T23:54:16.221863739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554" Sep 8 23:54:16.223167 containerd[1447]: time="2025-09-08T23:54:16.223116937Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:16.225660 containerd[1447]: time="2025-09-08T23:54:16.225605997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:16.226789 containerd[1447]: time="2025-09-08T23:54:16.226760577Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 959.335164ms" Sep 8 23:54:16.226838 containerd[1447]: time="2025-09-08T23:54:16.226794402Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Sep 8 23:54:16.227499 containerd[1447]: 
time="2025-09-08T23:54:16.227321477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 8 23:54:17.231552 containerd[1447]: time="2025-09-08T23:54:17.231490097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:17.231962 containerd[1447]: time="2025-09-08T23:54:17.231908777Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529" Sep 8 23:54:17.232823 containerd[1447]: time="2025-09-08T23:54:17.232793926Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:17.238903 containerd[1447]: time="2025-09-08T23:54:17.238859905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:17.239884 containerd[1447]: time="2025-09-08T23:54:17.239860565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.012506556s" Sep 8 23:54:17.239927 containerd[1447]: time="2025-09-08T23:54:17.239889865Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Sep 8 23:54:17.240735 containerd[1447]: time="2025-09-08T23:54:17.240474096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 8 23:54:17.620129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 8 23:54:17.630354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:17.735359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:17.738358 (kubelet)[1879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:54:17.776158 kubelet[1879]: E0908 23:54:17.775904 1879 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:54:17.779749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:54:17.779906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:54:17.780321 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.5M memory peak. Sep 8 23:54:18.265994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896900194.mount: Deactivated successfully. 
Sep 8 23:54:18.642465 containerd[1447]: time="2025-09-08T23:54:18.642318743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:18.643048 containerd[1447]: time="2025-09-08T23:54:18.643009151Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Sep 8 23:54:18.643957 containerd[1447]: time="2025-09-08T23:54:18.643901483Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:18.647355 containerd[1447]: time="2025-09-08T23:54:18.647133522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:18.647984 containerd[1447]: time="2025-09-08T23:54:18.647846062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.407329693s" Sep 8 23:54:18.647984 containerd[1447]: time="2025-09-08T23:54:18.647885186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 8 23:54:18.648804 containerd[1447]: time="2025-09-08T23:54:18.648775510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 8 23:54:19.409597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853903168.mount: Deactivated successfully. 
Sep 8 23:54:20.141715 containerd[1447]: time="2025-09-08T23:54:20.141664778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:20.142214 containerd[1447]: time="2025-09-08T23:54:20.142159966Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 8 23:54:20.143176 containerd[1447]: time="2025-09-08T23:54:20.143126584Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:20.147571 containerd[1447]: time="2025-09-08T23:54:20.147041493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:20.149389 containerd[1447]: time="2025-09-08T23:54:20.149351578Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.500537882s" Sep 8 23:54:20.149389 containerd[1447]: time="2025-09-08T23:54:20.149390744Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 8 23:54:20.149829 containerd[1447]: time="2025-09-08T23:54:20.149804952Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 8 23:54:21.016866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658581922.mount: Deactivated successfully. 
Sep 8 23:54:21.024747 containerd[1447]: time="2025-09-08T23:54:21.024687217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:21.026402 containerd[1447]: time="2025-09-08T23:54:21.026157062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 8 23:54:21.027241 containerd[1447]: time="2025-09-08T23:54:21.027194091Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:21.030450 containerd[1447]: time="2025-09-08T23:54:21.030408350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:21.032376 containerd[1447]: time="2025-09-08T23:54:21.032348956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 882.509537ms" Sep 8 23:54:21.032431 containerd[1447]: time="2025-09-08T23:54:21.032405033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 8 23:54:21.033044 containerd[1447]: time="2025-09-08T23:54:21.032871662Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 8 23:54:21.544739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1312526345.mount: Deactivated successfully. Sep 8 23:54:23.044521 containerd[1447]: time="2025-09-08T23:54:23.044454943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:23.079420 containerd[1447]: time="2025-09-08T23:54:23.079349396Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 8 23:54:23.081481 containerd[1447]: time="2025-09-08T23:54:23.081428425Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:23.086176 containerd[1447]: time="2025-09-08T23:54:23.085843156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:23.087019 containerd[1447]: time="2025-09-08T23:54:23.086986293Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.054085155s" Sep 8 23:54:23.087071 containerd[1447]: time="2025-09-08T23:54:23.087020486Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 8 23:54:27.822190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
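Editor's note on the restart cadence: the kubelet fails at 23:54:07, is retried at 23:54:17, and a second restart is scheduled at 23:54:27, roughly ten seconds apart. That rhythm is plain systemd restart policy, not something the kubelet decides. Upstream kubeadm-style deployments ship a unit of roughly this shape; whether Flatcar's kubelet.service matches it exactly, including the binary path, is an assumption:

    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Wants=network-online.target
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/kubelet
    # Restart on every exit, with no start limit, waiting 10 seconds
    # between attempts: the ten-second cadence visible in this journal.
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target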
Sep 8 23:54:27.833393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:27.842747 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 8 23:54:27.842846 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 8 23:54:27.843133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:27.851541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:27.872843 systemd[1]: Reload requested from client PID 2039 ('systemctl') (unit session-5.scope)... Sep 8 23:54:27.872859 systemd[1]: Reloading... Sep 8 23:54:27.949545 zram_generator::config[2086]: No configuration found. Sep 8 23:54:28.087166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:28.169700 systemd[1]: Reloading finished in 296 ms. Sep 8 23:54:28.208179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:28.210989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:28.212270 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:54:28.212482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:28.212526 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.1M memory peak. Sep 8 23:54:28.214074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:28.318949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:28.322635 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:54:28.358910 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:54:28.358910 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:54:28.358910 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
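Editor's note on the "Referenced but unset environment variable" messages (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) and the deprecated-flag warnings above: both come from the kubeadm wiring, in which a systemd drop-in expands those variables on the kubelet command line, points --config at /var/lib/kubelet/config.yaml, and leaves flags such as --container-runtime-endpoint to migrate into that config file. The upstream drop-in has roughly the following shape; the exact paths used on Flatcar are an assumption:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (upstream shape, paths assumed)
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # kubeadm writes KUBELET_KUBEADM_ARGS at init/join time; KUBELET_EXTRA_ARGS is for the admin.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

Until kubeadm-flags.env and config.yaml exist, the variables expand to empty strings, which is exactly what the journal reports.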
Sep 8 23:54:28.359247 kubelet[2130]: I0908 23:54:28.358893 2130 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:54:29.540757 kubelet[2130]: I0908 23:54:29.540705 2130 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:54:29.540757 kubelet[2130]: I0908 23:54:29.540740 2130 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:54:29.541101 kubelet[2130]: I0908 23:54:29.540992 2130 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:54:29.564228 kubelet[2130]: E0908 23:54:29.564165 2130 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:29.565817 kubelet[2130]: I0908 23:54:29.565783 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:54:29.571193 kubelet[2130]: E0908 23:54:29.571071 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:54:29.571193 kubelet[2130]: I0908 23:54:29.571103 2130 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:54:29.575808 kubelet[2130]: I0908 23:54:29.575785 2130 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:54:29.577032 kubelet[2130]: I0908 23:54:29.576969 2130 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:54:29.577182 kubelet[2130]: I0908 23:54:29.577007 2130 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:54:29.577269 kubelet[2130]: I0908 23:54:29.577257 2130 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:54:29.577269 kubelet[2130]: I0908 23:54:29.577268 2130 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:54:29.577491 kubelet[2130]: I0908 23:54:29.577475 2130 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:29.579784 kubelet[2130]: I0908 23:54:29.579763 2130 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:54:29.579821 kubelet[2130]: I0908 23:54:29.579788 2130 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:54:29.579821 kubelet[2130]: I0908 23:54:29.579806 2130 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:54:29.579821 kubelet[2130]: I0908 23:54:29.579816 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:54:29.582640 kubelet[2130]: I0908 23:54:29.582618 2130 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:54:29.583248 kubelet[2130]: W0908 23:54:29.583006 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:29.583248 kubelet[2130]: E0908 23:54:29.583070 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:29.583343 kubelet[2130]: I0908 23:54:29.583253 2130 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:54:29.583480 kubelet[2130]: W0908 23:54:29.583362 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:54:29.583564 kubelet[2130]: W0908 23:54:29.583524 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:29.583597 kubelet[2130]: E0908 23:54:29.583569 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:29.584155 kubelet[2130]: I0908 23:54:29.584127 2130 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:54:29.584199 kubelet[2130]: I0908 23:54:29.584178 2130 server.go:1287] "Started kubelet" Sep 8 23:54:29.584858 kubelet[2130]: I0908 23:54:29.584812 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:54:29.585236 kubelet[2130]: I0908 23:54:29.585126 2130 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:54:29.585236 kubelet[2130]: I0908 23:54:29.585209 2130 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:54:29.587110 kubelet[2130]: I0908 23:54:29.585829 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:54:29.587110 kubelet[2130]: I0908 23:54:29.586019 2130 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:54:29.587858 kubelet[2130]: E0908 23:54:29.587548 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.99:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186373dc4a7a519d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:54:29.584155037 +0000 UTC m=+1.258528107,LastTimestamp:2025-09-08 23:54:29.584155037 +0000 UTC m=+1.258528107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 8 23:54:29.588008 kubelet[2130]: I0908 23:54:29.587981 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:54:29.589393 kubelet[2130]: I0908 23:54:29.589260 2130 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:54:29.589393 kubelet[2130]: I0908 23:54:29.589373 2130 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 
23:54:29.589393 kubelet[2130]: E0908 23:54:29.589374 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:29.589506 kubelet[2130]: I0908 23:54:29.589432 2130 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:54:29.589783 kubelet[2130]: W0908 23:54:29.589733 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:29.589832 kubelet[2130]: E0908 23:54:29.589782 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:29.589972 kubelet[2130]: E0908 23:54:29.589941 2130 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:54:29.590808 kubelet[2130]: I0908 23:54:29.590779 2130 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:54:29.590808 kubelet[2130]: I0908 23:54:29.590795 2130 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:54:29.590930 kubelet[2130]: I0908 23:54:29.590907 2130 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:54:29.591399 kubelet[2130]: E0908 23:54:29.591364 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="200ms" Sep 8 23:54:29.603716 kubelet[2130]: I0908 23:54:29.603682 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:54:29.603716 kubelet[2130]: I0908 23:54:29.603703 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:54:29.603716 kubelet[2130]: I0908 23:54:29.603723 2130 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:29.606487 kubelet[2130]: I0908 23:54:29.606439 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:54:29.607481 kubelet[2130]: I0908 23:54:29.607436 2130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:54:29.607481 kubelet[2130]: I0908 23:54:29.607459 2130 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:54:29.607481 kubelet[2130]: I0908 23:54:29.607478 2130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:54:29.607481 kubelet[2130]: I0908 23:54:29.607486 2130 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:54:29.607599 kubelet[2130]: E0908 23:54:29.607522 2130 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:54:29.689820 kubelet[2130]: E0908 23:54:29.689746 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:29.702198 kubelet[2130]: W0908 23:54:29.701965 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:29.702198 kubelet[2130]: E0908 23:54:29.702033 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:29.702297 kubelet[2130]: I0908 23:54:29.702229 2130 policy_none.go:49] "None policy: Start" Sep 8 23:54:29.702297 kubelet[2130]: I0908 23:54:29.702251 2130 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:54:29.702297 kubelet[2130]: I0908 23:54:29.702264 2130 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:54:29.707096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:54:29.708450 kubelet[2130]: E0908 23:54:29.707854 2130 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 8 23:54:29.718946 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:54:29.722034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:54:29.733926 kubelet[2130]: I0908 23:54:29.732838 2130 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:54:29.733926 kubelet[2130]: I0908 23:54:29.733026 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:54:29.733926 kubelet[2130]: I0908 23:54:29.733037 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:54:29.733926 kubelet[2130]: I0908 23:54:29.733371 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:54:29.734331 kubelet[2130]: E0908 23:54:29.734316 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:54:29.734451 kubelet[2130]: E0908 23:54:29.734416 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 8 23:54:29.793598 kubelet[2130]: E0908 23:54:29.792455 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="400ms" Sep 8 23:54:29.835524 kubelet[2130]: I0908 23:54:29.835478 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:29.835906 kubelet[2130]: E0908 23:54:29.835882 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Sep 8 23:54:29.915860 systemd[1]: Created slice kubepods-burstable-podd92d1d99c1d484197c61574577f73f02.slice - libcontainer container kubepods-burstable-podd92d1d99c1d484197c61574577f73f02.slice. Sep 8 23:54:29.926908 kubelet[2130]: E0908 23:54:29.926830 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:29.928576 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 8 23:54:29.939251 kubelet[2130]: E0908 23:54:29.939214 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:29.941386 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. 
Sep 8 23:54:29.942957 kubelet[2130]: E0908 23:54:29.942938 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:30.037458 kubelet[2130]: I0908 23:54:30.037423 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:30.037799 kubelet[2130]: E0908 23:54:30.037774 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Sep 8 23:54:30.090843 kubelet[2130]: I0908 23:54:30.090570 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:30.090843 kubelet[2130]: I0908 23:54:30.090604 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:30.090843 kubelet[2130]: I0908 23:54:30.090623 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:30.090843 kubelet[2130]: I0908 23:54:30.090642 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:30.090843 kubelet[2130]: I0908 23:54:30.090660 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:30.091006 kubelet[2130]: I0908 23:54:30.090676 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:30.091006 kubelet[2130]: I0908 23:54:30.090700 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:30.091006 kubelet[2130]: I0908 23:54:30.090716 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:30.091006 kubelet[2130]: I0908 23:54:30.090738 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:30.193532 kubelet[2130]: E0908 23:54:30.193474 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="800ms" Sep 8 23:54:30.228607 containerd[1447]: time="2025-09-08T23:54:30.228556081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d92d1d99c1d484197c61574577f73f02,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:30.242626 containerd[1447]: time="2025-09-08T23:54:30.241065055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:30.244248 containerd[1447]: time="2025-09-08T23:54:30.244006901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:30.439474 kubelet[2130]: I0908 23:54:30.439349 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:30.440060 kubelet[2130]: E0908 23:54:30.440031 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Sep 8 23:54:30.597435 kubelet[2130]: W0908 23:54:30.597380 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:30.597435 kubelet[2130]: E0908 23:54:30.597436 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:30.857091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446966857.mount: Deactivated successfully. 
Sep 8 23:54:30.864314 containerd[1447]: time="2025-09-08T23:54:30.864261030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:30.865166 containerd[1447]: time="2025-09-08T23:54:30.865054421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 8 23:54:30.865862 containerd[1447]: time="2025-09-08T23:54:30.865825793Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:30.868672 containerd[1447]: time="2025-09-08T23:54:30.868605703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:30.869672 containerd[1447]: time="2025-09-08T23:54:30.869619560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:54:30.871921 containerd[1447]: time="2025-09-08T23:54:30.871774862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:30.872430 containerd[1447]: time="2025-09-08T23:54:30.872373247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 643.731774ms" Sep 8 23:54:30.873963 containerd[1447]: time="2025-09-08T23:54:30.873918153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:54:30.874734 containerd[1447]: time="2025-09-08T23:54:30.874689165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:54:30.877648 containerd[1447]: time="2025-09-08T23:54:30.877049200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.661868ms" Sep 8 23:54:30.883329 containerd[1447]: time="2025-09-08T23:54:30.883008557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 638.941365ms" Sep 8 23:54:30.948583 kubelet[2130]: W0908 23:54:30.948504 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:30.948583 kubelet[2130]: 
E0908 23:54:30.948584 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:30.994464 kubelet[2130]: E0908 23:54:30.994184 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.99:6443: connect: connection refused" interval="1.6s" Sep 8 23:54:31.008983 containerd[1447]: time="2025-09-08T23:54:31.008890760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:31.008983 containerd[1447]: time="2025-09-08T23:54:31.008947282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:31.008983 containerd[1447]: time="2025-09-08T23:54:31.008958450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.009949 containerd[1447]: time="2025-09-08T23:54:31.009666534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:31.009949 containerd[1447]: time="2025-09-08T23:54:31.009726858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:31.009949 containerd[1447]: time="2025-09-08T23:54:31.009744551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.009949 containerd[1447]: time="2025-09-08T23:54:31.009821848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.009949 containerd[1447]: time="2025-09-08T23:54:31.009265757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.013808 containerd[1447]: time="2025-09-08T23:54:31.013650521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:31.013808 containerd[1447]: time="2025-09-08T23:54:31.013702999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:31.013808 containerd[1447]: time="2025-09-08T23:54:31.013715008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.013949 containerd[1447]: time="2025-09-08T23:54:31.013790024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:31.039339 systemd[1]: Started cri-containerd-94bf6e79b16189994d24854d5ee204d6371ccbaa9395faf1248cf86b907a13e8.scope - libcontainer container 94bf6e79b16189994d24854d5ee204d6371ccbaa9395faf1248cf86b907a13e8. 
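The lease controller above first retries with interval="800ms" and, after the next failure, with interval="1.6s"; the intervals in this journal suggest a doubling backoff. The sketch below only illustrates that doubling pattern with an arbitrary cap and step count; it is not the kubelet's actual nodelease retry code.

```python
"""Illustrate the doubling retry interval visible in the lease errors above
(interval="800ms", then interval="1.6s"). The cap and step count are arbitrary
illustrative values, not taken from kubelet source."""

def retry_intervals(initial_s: float = 0.8, factor: float = 2.0,
                    cap_s: float = 7.0, steps: int = 5) -> list[float]:
    intervals, current = [], initial_s
    for _ in range(steps):
        intervals.append(min(current, cap_s))
        current *= factor
    return intervals


print(retry_intervals())  # [0.8, 1.6, 3.2, 6.4, 7.0]
```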
Sep 8 23:54:31.040563 systemd[1]: Started cri-containerd-ad3c17fdaab0b0d3e93a86816ff58d113b7455db4fe3bdaac2ceb9c4dd8bc845.scope - libcontainer container ad3c17fdaab0b0d3e93a86816ff58d113b7455db4fe3bdaac2ceb9c4dd8bc845. Sep 8 23:54:31.041938 systemd[1]: Started cri-containerd-ef5571389e7bc3f61e02d0da954558a7f4f3223acda8cd004f9093678a055016.scope - libcontainer container ef5571389e7bc3f61e02d0da954558a7f4f3223acda8cd004f9093678a055016. Sep 8 23:54:31.049679 kubelet[2130]: W0908 23:54:31.049539 2130 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:31.049679 kubelet[2130]: E0908 23:54:31.049625 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:31.081093 containerd[1447]: time="2025-09-08T23:54:31.080503812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"94bf6e79b16189994d24854d5ee204d6371ccbaa9395faf1248cf86b907a13e8\"" Sep 8 23:54:31.083715 containerd[1447]: time="2025-09-08T23:54:31.083674917Z" level=info msg="CreateContainer within sandbox \"94bf6e79b16189994d24854d5ee204d6371ccbaa9395faf1248cf86b907a13e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 8 23:54:31.085619 containerd[1447]: time="2025-09-08T23:54:31.085577525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3c17fdaab0b0d3e93a86816ff58d113b7455db4fe3bdaac2ceb9c4dd8bc845\"" Sep 8 23:54:31.088457 containerd[1447]: time="2025-09-08T23:54:31.088418626Z" level=info msg="CreateContainer within sandbox \"ad3c17fdaab0b0d3e93a86816ff58d113b7455db4fe3bdaac2ceb9c4dd8bc845\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 8 23:54:31.092561 containerd[1447]: time="2025-09-08T23:54:31.092499405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d92d1d99c1d484197c61574577f73f02,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef5571389e7bc3f61e02d0da954558a7f4f3223acda8cd004f9093678a055016\"" Sep 8 23:54:31.095006 containerd[1447]: time="2025-09-08T23:54:31.094974796Z" level=info msg="CreateContainer within sandbox \"ef5571389e7bc3f61e02d0da954558a7f4f3223acda8cd004f9093678a055016\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 8 23:54:31.103239 containerd[1447]: time="2025-09-08T23:54:31.103076149Z" level=info msg="CreateContainer within sandbox \"94bf6e79b16189994d24854d5ee204d6371ccbaa9395faf1248cf86b907a13e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d9116e100c48b08c6efda177fa4b9443873ff80fc84a8a3bed918fa826ebd60\"" Sep 8 23:54:31.103985 containerd[1447]: time="2025-09-08T23:54:31.103957040Z" level=info msg="StartContainer for \"4d9116e100c48b08c6efda177fa4b9443873ff80fc84a8a3bed918fa826ebd60\"" Sep 8 23:54:31.107389 kubelet[2130]: W0908 23:54:31.107333 2130 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.99:6443: connect: connection refused Sep 8 23:54:31.107597 kubelet[2130]: E0908 23:54:31.107528 2130 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.99:6443: connect: connection refused" logger="UnhandledError" Sep 8 23:54:31.108684 containerd[1447]: time="2025-09-08T23:54:31.108498159Z" level=info msg="CreateContainer within sandbox \"ad3c17fdaab0b0d3e93a86816ff58d113b7455db4fe3bdaac2ceb9c4dd8bc845\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a31a11700ec5513da29c71cb03fcea3b72b82aa5b7242849953546d772a369b8\"" Sep 8 23:54:31.109722 containerd[1447]: time="2025-09-08T23:54:31.109685117Z" level=info msg="StartContainer for \"a31a11700ec5513da29c71cb03fcea3b72b82aa5b7242849953546d772a369b8\"" Sep 8 23:54:31.112238 containerd[1447]: time="2025-09-08T23:54:31.112171276Z" level=info msg="CreateContainer within sandbox \"ef5571389e7bc3f61e02d0da954558a7f4f3223acda8cd004f9093678a055016\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ee0b5994f59066b2e8058cfe6c9f4d94e781a0f0e02c367ecc1c6a2d93233db\"" Sep 8 23:54:31.113918 containerd[1447]: time="2025-09-08T23:54:31.112761913Z" level=info msg="StartContainer for \"5ee0b5994f59066b2e8058cfe6c9f4d94e781a0f0e02c367ecc1c6a2d93233db\"" Sep 8 23:54:31.134339 systemd[1]: Started cri-containerd-4d9116e100c48b08c6efda177fa4b9443873ff80fc84a8a3bed918fa826ebd60.scope - libcontainer container 4d9116e100c48b08c6efda177fa4b9443873ff80fc84a8a3bed918fa826ebd60. Sep 8 23:54:31.139692 systemd[1]: Started cri-containerd-5ee0b5994f59066b2e8058cfe6c9f4d94e781a0f0e02c367ecc1c6a2d93233db.scope - libcontainer container 5ee0b5994f59066b2e8058cfe6c9f4d94e781a0f0e02c367ecc1c6a2d93233db. Sep 8 23:54:31.141315 systemd[1]: Started cri-containerd-a31a11700ec5513da29c71cb03fcea3b72b82aa5b7242849953546d772a369b8.scope - libcontainer container a31a11700ec5513da29c71cb03fcea3b72b82aa5b7242849953546d772a369b8. 
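systemd reports each control-plane container above as a transient `cri-containerd-<id>.scope` unit. A quick way to enumerate those scopes outside the journal is to glob for them with systemctl; the snippet below is only a diagnostic sketch wrapping that command, and the unit-name pattern is the one visible in this log.

```python
"""List the transient cri-containerd-*.scope units systemd reports starting above.
Purely a diagnostic sketch around `systemctl list-units`."""
import subprocess


def cri_scopes() -> list[str]:
    out = subprocess.run(
        ["systemctl", "list-units", "--type=scope", "--plain", "--no-legend",
         "cri-containerd-*.scope"],
        capture_output=True, text=True, check=True,
    ).stdout
    # First column of each row is the unit name.
    return [line.split()[0] for line in out.splitlines() if line.strip()]


if __name__ == "__main__":
    for unit in cri_scopes():
        print(unit)
```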
Sep 8 23:54:31.189502 containerd[1447]: time="2025-09-08T23:54:31.189297606Z" level=info msg="StartContainer for \"a31a11700ec5513da29c71cb03fcea3b72b82aa5b7242849953546d772a369b8\" returns successfully" Sep 8 23:54:31.189502 containerd[1447]: time="2025-09-08T23:54:31.189353047Z" level=info msg="StartContainer for \"4d9116e100c48b08c6efda177fa4b9443873ff80fc84a8a3bed918fa826ebd60\" returns successfully" Sep 8 23:54:31.189502 containerd[1447]: time="2025-09-08T23:54:31.189438511Z" level=info msg="StartContainer for \"5ee0b5994f59066b2e8058cfe6c9f4d94e781a0f0e02c367ecc1c6a2d93233db\" returns successfully" Sep 8 23:54:31.242684 kubelet[2130]: I0908 23:54:31.242270 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:31.242684 kubelet[2130]: E0908 23:54:31.242615 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.99:6443/api/v1/nodes\": dial tcp 10.0.0.99:6443: connect: connection refused" node="localhost" Sep 8 23:54:31.624782 kubelet[2130]: E0908 23:54:31.624747 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:31.627498 kubelet[2130]: E0908 23:54:31.627466 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:31.628894 kubelet[2130]: E0908 23:54:31.628873 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:32.629071 kubelet[2130]: E0908 23:54:32.628996 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 8 23:54:32.633486 kubelet[2130]: E0908 23:54:32.633455 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:32.633809 kubelet[2130]: E0908 23:54:32.633786 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:32.844308 kubelet[2130]: I0908 23:54:32.844263 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:32.867540 kubelet[2130]: I0908 23:54:32.867355 2130 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:54:32.867540 kubelet[2130]: E0908 23:54:32.867393 2130 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 8 23:54:32.883500 kubelet[2130]: E0908 23:54:32.883412 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:32.984293 kubelet[2130]: E0908 23:54:32.984243 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.085082 kubelet[2130]: E0908 23:54:33.085034 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.116843 kubelet[2130]: E0908 23:54:33.116816 2130 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 8 23:54:33.186112 kubelet[2130]: E0908 23:54:33.186005 2130 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.286834 kubelet[2130]: E0908 23:54:33.286789 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.387613 kubelet[2130]: E0908 23:54:33.387566 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.488298 kubelet[2130]: E0908 23:54:33.488171 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.589022 kubelet[2130]: E0908 23:54:33.588964 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.690153 kubelet[2130]: E0908 23:54:33.690092 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:33.790788 kubelet[2130]: I0908 23:54:33.790563 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:33.801200 kubelet[2130]: I0908 23:54:33.801128 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:33.805419 kubelet[2130]: I0908 23:54:33.805382 2130 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:34.584312 kubelet[2130]: I0908 23:54:34.584272 2130 apiserver.go:52] "Watching apiserver" Sep 8 23:54:34.590174 kubelet[2130]: I0908 23:54:34.590129 2130 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:54:34.761442 systemd[1]: Reload requested from client PID 2414 ('systemctl') (unit session-5.scope)... Sep 8 23:54:34.761461 systemd[1]: Reloading... Sep 8 23:54:34.832199 zram_generator::config[2459]: No configuration found. Sep 8 23:54:34.938070 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:54:35.037482 systemd[1]: Reloading finished in 275 ms. Sep 8 23:54:35.056519 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:35.074133 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:54:35.074424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:35.074483 systemd[1]: kubelet.service: Consumed 1.603s CPU time, 131.5M memory peak. Sep 8 23:54:35.084475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:54:35.195984 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:54:35.200887 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:54:35.246962 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:54:35.246962 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 8 23:54:35.246962 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:54:35.247463 kubelet[2500]: I0908 23:54:35.247045 2500 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:54:35.256168 kubelet[2500]: I0908 23:54:35.255542 2500 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:54:35.256168 kubelet[2500]: I0908 23:54:35.255574 2500 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:54:35.256168 kubelet[2500]: I0908 23:54:35.255830 2500 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:54:35.257740 kubelet[2500]: I0908 23:54:35.257716 2500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 8 23:54:35.260694 kubelet[2500]: I0908 23:54:35.260652 2500 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:54:35.263941 kubelet[2500]: E0908 23:54:35.263911 2500 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:54:35.263941 kubelet[2500]: I0908 23:54:35.263940 2500 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:54:35.268510 kubelet[2500]: I0908 23:54:35.268485 2500 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:54:35.268689 kubelet[2500]: I0908 23:54:35.268664 2500 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:54:35.268845 kubelet[2500]: I0908 23:54:35.268691 2500 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:54:35.268917 kubelet[2500]: I0908 23:54:35.268856 2500 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:54:35.268917 kubelet[2500]: I0908 23:54:35.268864 2500 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:54:35.268917 kubelet[2500]: I0908 23:54:35.268902 2500 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:35.269042 kubelet[2500]: I0908 23:54:35.269029 2500 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:54:35.269072 kubelet[2500]: I0908 23:54:35.269046 2500 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:54:35.269072 kubelet[2500]: I0908 23:54:35.269062 2500 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:54:35.269072 kubelet[2500]: I0908 23:54:35.269072 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:54:35.272680 kubelet[2500]: I0908 23:54:35.272657 2500 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:54:35.273203 kubelet[2500]: I0908 23:54:35.273102 2500 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:54:35.274084 kubelet[2500]: I0908 23:54:35.273502 2500 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:54:35.274084 kubelet[2500]: I0908 23:54:35.273537 2500 server.go:1287] "Started kubelet" Sep 8 23:54:35.274084 kubelet[2500]: I0908 23:54:35.273946 2500 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:54:35.274084 kubelet[2500]: I0908 23:54:35.274044 2500 ratelimit.go:55] "Setting rate limiting 
for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:54:35.275426 kubelet[2500]: I0908 23:54:35.275399 2500 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:54:35.275962 kubelet[2500]: I0908 23:54:35.275931 2500 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:54:35.278351 kubelet[2500]: I0908 23:54:35.278323 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:54:35.278643 kubelet[2500]: I0908 23:54:35.278620 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:54:35.280053 kubelet[2500]: E0908 23:54:35.280030 2500 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 8 23:54:35.280094 kubelet[2500]: I0908 23:54:35.280068 2500 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:54:35.283184 kubelet[2500]: I0908 23:54:35.282706 2500 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:54:35.283184 kubelet[2500]: I0908 23:54:35.282850 2500 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:54:35.287910 kubelet[2500]: I0908 23:54:35.287879 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:54:35.288472 kubelet[2500]: I0908 23:54:35.288278 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:54:35.289277 kubelet[2500]: I0908 23:54:35.289256 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:54:35.289385 kubelet[2500]: I0908 23:54:35.289373 2500 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:54:35.289473 kubelet[2500]: I0908 23:54:35.289458 2500 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 8 23:54:35.289546 kubelet[2500]: I0908 23:54:35.289537 2500 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:54:35.289845 kubelet[2500]: E0908 23:54:35.289824 2500 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:54:35.302865 kubelet[2500]: I0908 23:54:35.302807 2500 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:54:35.302865 kubelet[2500]: I0908 23:54:35.302845 2500 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:54:35.305371 kubelet[2500]: E0908 23:54:35.305337 2500 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:54:35.341061 kubelet[2500]: I0908 23:54:35.341031 2500 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:54:35.341061 kubelet[2500]: I0908 23:54:35.341052 2500 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:54:35.341061 kubelet[2500]: I0908 23:54:35.341073 2500 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:54:35.341360 kubelet[2500]: I0908 23:54:35.341264 2500 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:54:35.341360 kubelet[2500]: I0908 23:54:35.341281 2500 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:54:35.341360 kubelet[2500]: I0908 23:54:35.341303 2500 policy_none.go:49] "None policy: Start" Sep 8 23:54:35.341360 kubelet[2500]: I0908 23:54:35.341312 2500 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:54:35.341360 kubelet[2500]: I0908 23:54:35.341321 2500 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:54:35.341490 kubelet[2500]: I0908 23:54:35.341422 2500 state_mem.go:75] "Updated machine memory state" Sep 8 23:54:35.345798 kubelet[2500]: I0908 23:54:35.345766 2500 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:54:35.346749 kubelet[2500]: I0908 23:54:35.345973 2500 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:54:35.346749 kubelet[2500]: I0908 23:54:35.345992 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:54:35.346749 kubelet[2500]: I0908 23:54:35.346324 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:54:35.347217 kubelet[2500]: E0908 23:54:35.347084 2500 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:54:35.390800 kubelet[2500]: I0908 23:54:35.390763 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:35.390800 kubelet[2500]: I0908 23:54:35.390791 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:35.390982 kubelet[2500]: I0908 23:54:35.390865 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.396217 kubelet[2500]: E0908 23:54:35.396181 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:35.396366 kubelet[2500]: E0908 23:54:35.396182 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.396573 kubelet[2500]: E0908 23:54:35.396551 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:35.448121 kubelet[2500]: I0908 23:54:35.447995 2500 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:54:35.473835 kubelet[2500]: I0908 23:54:35.473792 2500 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:54:35.473949 kubelet[2500]: I0908 23:54:35.473891 2500 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:54:35.583851 kubelet[2500]: I0908 23:54:35.583790 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.583851 kubelet[2500]: I0908 23:54:35.583831 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.583851 kubelet[2500]: I0908 23:54:35.583855 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:35.584045 kubelet[2500]: I0908 23:54:35.583875 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:35.584045 kubelet[2500]: I0908 23:54:35.583890 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " 
pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:35.584045 kubelet[2500]: I0908 23:54:35.583906 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.584045 kubelet[2500]: I0908 23:54:35.583920 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:35.584045 kubelet[2500]: I0908 23:54:35.583938 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d92d1d99c1d484197c61574577f73f02-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d92d1d99c1d484197c61574577f73f02\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:35.584190 kubelet[2500]: I0908 23:54:35.583953 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:36.269781 kubelet[2500]: I0908 23:54:36.269725 2500 apiserver.go:52] "Watching apiserver" Sep 8 23:54:36.283472 kubelet[2500]: I0908 23:54:36.283427 2500 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:54:36.317180 kubelet[2500]: I0908 23:54:36.317095 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:36.317294 kubelet[2500]: I0908 23:54:36.317229 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:36.317294 kubelet[2500]: I0908 23:54:36.317272 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:36.325815 kubelet[2500]: E0908 23:54:36.325776 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:54:36.326050 kubelet[2500]: E0908 23:54:36.326031 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:54:36.326192 kubelet[2500]: E0908 23:54:36.326172 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:54:36.346952 kubelet[2500]: I0908 23:54:36.346867 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.346846543 podStartE2EDuration="3.346846543s" podCreationTimestamp="2025-09-08 23:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:36.338648671 +0000 UTC m=+1.133395378" 
watchObservedRunningTime="2025-09-08 23:54:36.346846543 +0000 UTC m=+1.141593250" Sep 8 23:54:36.364957 kubelet[2500]: I0908 23:54:36.364879 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.36486014 podStartE2EDuration="3.36486014s" podCreationTimestamp="2025-09-08 23:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:36.347026571 +0000 UTC m=+1.141773278" watchObservedRunningTime="2025-09-08 23:54:36.36486014 +0000 UTC m=+1.159606847" Sep 8 23:54:36.483027 kubelet[2500]: I0908 23:54:36.482587 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.482567374 podStartE2EDuration="3.482567374s" podCreationTimestamp="2025-09-08 23:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:36.364839412 +0000 UTC m=+1.159586119" watchObservedRunningTime="2025-09-08 23:54:36.482567374 +0000 UTC m=+1.277314081" Sep 8 23:54:36.486561 sudo[1592]: pam_unix(sudo:session): session closed for user root Sep 8 23:54:36.488189 sshd[1591]: Connection closed by 10.0.0.1 port 59750 Sep 8 23:54:36.488790 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:36.492353 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:54:36.492458 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:54:36.494256 systemd[1]: session-5.scope: Consumed 5.822s CPU time, 219.9M memory peak. Sep 8 23:54:36.495375 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:59750.service: Deactivated successfully. Sep 8 23:54:36.499712 systemd-logind[1433]: Removed session 5. Sep 8 23:54:38.665573 kubelet[2500]: I0908 23:54:38.665532 2500 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:54:38.666631 containerd[1447]: time="2025-09-08T23:54:38.666318316Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:54:38.667383 kubelet[2500]: I0908 23:54:38.667108 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:54:39.646534 systemd[1]: Created slice kubepods-besteffort-podbfde636f_7edf_466a_b628_21d578272bdd.slice - libcontainer container kubepods-besteffort-podbfde636f_7edf_466a_b628_21d578272bdd.slice. Sep 8 23:54:39.660007 systemd[1]: Created slice kubepods-burstable-pod256c16e5_fb3a_4036_8e27_678efa9f838d.slice - libcontainer container kubepods-burstable-pod256c16e5_fb3a_4036_8e27_678efa9f838d.slice. 
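The pod_startup_latency_tracker entries above report podStartSLOduration values that line up exactly with watchObservedRunningTime minus podCreationTimestamp; for kube-controller-manager-localhost, 23:54:36.346846543 − 23:54:33 = 3.346846543s. The snippet below just reproduces that subtraction with the timestamps copied from the log; it is an observation about these numbers, not a statement about how the tracker is implemented.

```python
"""Reproduce podStartSLOduration for kube-controller-manager-localhost from the
timestamps in the entry above. Python datetimes carry microseconds, so the
nanosecond tail of the logged value is truncated here."""
from datetime import datetime, timezone

created = datetime(2025, 9, 8, 23, 54, 33, tzinfo=timezone.utc)             # podCreationTimestamp
observed = datetime(2025, 9, 8, 23, 54, 36, 346846, tzinfo=timezone.utc)    # watchObservedRunningTime (truncated)

print((observed - created).total_seconds())  # 3.346846 ~ podStartSLOduration=3.346846543
```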
Sep 8 23:54:39.710004 kubelet[2500]: I0908 23:54:39.709163 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfde636f-7edf-466a-b628-21d578272bdd-xtables-lock\") pod \"kube-proxy-h4lnv\" (UID: \"bfde636f-7edf-466a-b628-21d578272bdd\") " pod="kube-system/kube-proxy-h4lnv" Sep 8 23:54:39.710004 kubelet[2500]: I0908 23:54:39.709205 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/256c16e5-fb3a-4036-8e27-678efa9f838d-xtables-lock\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.710004 kubelet[2500]: I0908 23:54:39.709233 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bpg\" (UniqueName: \"kubernetes.io/projected/256c16e5-fb3a-4036-8e27-678efa9f838d-kube-api-access-74bpg\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.710004 kubelet[2500]: I0908 23:54:39.709273 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfde636f-7edf-466a-b628-21d578272bdd-kube-proxy\") pod \"kube-proxy-h4lnv\" (UID: \"bfde636f-7edf-466a-b628-21d578272bdd\") " pod="kube-system/kube-proxy-h4lnv" Sep 8 23:54:39.710004 kubelet[2500]: I0908 23:54:39.709290 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kvwh\" (UniqueName: \"kubernetes.io/projected/bfde636f-7edf-466a-b628-21d578272bdd-kube-api-access-5kvwh\") pod \"kube-proxy-h4lnv\" (UID: \"bfde636f-7edf-466a-b628-21d578272bdd\") " pod="kube-system/kube-proxy-h4lnv" Sep 8 23:54:39.710482 kubelet[2500]: I0908 23:54:39.709318 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/256c16e5-fb3a-4036-8e27-678efa9f838d-flannel-cfg\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.710482 kubelet[2500]: I0908 23:54:39.709333 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/256c16e5-fb3a-4036-8e27-678efa9f838d-cni\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.710482 kubelet[2500]: I0908 23:54:39.709351 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfde636f-7edf-466a-b628-21d578272bdd-lib-modules\") pod \"kube-proxy-h4lnv\" (UID: \"bfde636f-7edf-466a-b628-21d578272bdd\") " pod="kube-system/kube-proxy-h4lnv" Sep 8 23:54:39.710482 kubelet[2500]: I0908 23:54:39.709370 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/256c16e5-fb3a-4036-8e27-678efa9f838d-run\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.710482 kubelet[2500]: I0908 23:54:39.709405 2500 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/256c16e5-fb3a-4036-8e27-678efa9f838d-cni-plugin\") pod \"kube-flannel-ds-lkmb5\" (UID: \"256c16e5-fb3a-4036-8e27-678efa9f838d\") " pod="kube-flannel/kube-flannel-ds-lkmb5" Sep 8 23:54:39.955873 containerd[1447]: time="2025-09-08T23:54:39.955753567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4lnv,Uid:bfde636f-7edf-466a-b628-21d578272bdd,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:39.963620 containerd[1447]: time="2025-09-08T23:54:39.963537465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lkmb5,Uid:256c16e5-fb3a-4036-8e27-678efa9f838d,Namespace:kube-flannel,Attempt:0,}" Sep 8 23:54:39.984585 containerd[1447]: time="2025-09-08T23:54:39.984495558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:39.984585 containerd[1447]: time="2025-09-08T23:54:39.984549854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:39.984585 containerd[1447]: time="2025-09-08T23:54:39.984561138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:39.984822 containerd[1447]: time="2025-09-08T23:54:39.984643442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:40.000464 containerd[1447]: time="2025-09-08T23:54:39.998751639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:40.000464 containerd[1447]: time="2025-09-08T23:54:39.998823220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:40.000464 containerd[1447]: time="2025-09-08T23:54:39.998837865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:40.000464 containerd[1447]: time="2025-09-08T23:54:39.998918649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:40.003361 systemd[1]: Started cri-containerd-ffeec5705e8aafd2916c27a1516cd1125f3864d0d5d77a82ce7e28183c3a4fc6.scope - libcontainer container ffeec5705e8aafd2916c27a1516cd1125f3864d0d5d77a82ce7e28183c3a4fc6. Sep 8 23:54:40.015063 systemd[1]: Started cri-containerd-6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9.scope - libcontainer container 6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9. 
Sep 8 23:54:40.028947 containerd[1447]: time="2025-09-08T23:54:40.028808967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4lnv,Uid:bfde636f-7edf-466a-b628-21d578272bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffeec5705e8aafd2916c27a1516cd1125f3864d0d5d77a82ce7e28183c3a4fc6\"" Sep 8 23:54:40.033752 containerd[1447]: time="2025-09-08T23:54:40.033708477Z" level=info msg="CreateContainer within sandbox \"ffeec5705e8aafd2916c27a1516cd1125f3864d0d5d77a82ce7e28183c3a4fc6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:54:40.049024 containerd[1447]: time="2025-09-08T23:54:40.048951442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-lkmb5,Uid:256c16e5-fb3a-4036-8e27-678efa9f838d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\"" Sep 8 23:54:40.051848 containerd[1447]: time="2025-09-08T23:54:40.051653088Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Sep 8 23:54:40.055394 containerd[1447]: time="2025-09-08T23:54:40.055257911Z" level=info msg="CreateContainer within sandbox \"ffeec5705e8aafd2916c27a1516cd1125f3864d0d5d77a82ce7e28183c3a4fc6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4d251caab87aa0950349c86da8283a12d64ca2600d4ab41fc9ab3bd94b1f0f18\"" Sep 8 23:54:40.056249 containerd[1447]: time="2025-09-08T23:54:40.056190536Z" level=info msg="StartContainer for \"4d251caab87aa0950349c86da8283a12d64ca2600d4ab41fc9ab3bd94b1f0f18\"" Sep 8 23:54:40.086376 systemd[1]: Started cri-containerd-4d251caab87aa0950349c86da8283a12d64ca2600d4ab41fc9ab3bd94b1f0f18.scope - libcontainer container 4d251caab87aa0950349c86da8283a12d64ca2600d4ab41fc9ab3bd94b1f0f18. Sep 8 23:54:40.114943 containerd[1447]: time="2025-09-08T23:54:40.114723143Z" level=info msg="StartContainer for \"4d251caab87aa0950349c86da8283a12d64ca2600d4ab41fc9ab3bd94b1f0f18\" returns successfully" Sep 8 23:54:40.339818 kubelet[2500]: I0908 23:54:40.339652 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h4lnv" podStartSLOduration=1.339634477 podStartE2EDuration="1.339634477s" podCreationTimestamp="2025-09-08 23:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:40.338666402 +0000 UTC m=+5.133413109" watchObservedRunningTime="2025-09-08 23:54:40.339634477 +0000 UTC m=+5.134381184" Sep 8 23:54:41.188114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039729409.mount: Deactivated successfully. 
Sep 8 23:54:41.224966 containerd[1447]: time="2025-09-08T23:54:41.224086513Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:41.226172 containerd[1447]: time="2025-09-08T23:54:41.226086009Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Sep 8 23:54:41.227559 containerd[1447]: time="2025-09-08T23:54:41.227495708Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:41.231641 containerd[1447]: time="2025-09-08T23:54:41.231601929Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:41.233374 containerd[1447]: time="2025-09-08T23:54:41.233322710Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 1.181629691s" Sep 8 23:54:41.233374 containerd[1447]: time="2025-09-08T23:54:41.233363561Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Sep 8 23:54:41.235674 containerd[1447]: time="2025-09-08T23:54:41.235635651Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Sep 8 23:54:41.249549 containerd[1447]: time="2025-09-08T23:54:41.249486086Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232\"" Sep 8 23:54:41.250354 containerd[1447]: time="2025-09-08T23:54:41.250204358Z" level=info msg="StartContainer for \"aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232\"" Sep 8 23:54:41.278364 systemd[1]: Started cri-containerd-aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232.scope - libcontainer container aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232. Sep 8 23:54:41.311124 systemd[1]: cri-containerd-aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232.scope: Deactivated successfully. 
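The pull record above for docker.io/flannel/flannel-cni-plugin:v1.1.2 contains enough to estimate effective pull throughput: the reported image size of 3662650 bytes over 1.181629691s works out to roughly 3 MiB/s. The numbers below are copied from that log entry; the calculation is only an illustration of what those fields mean.

```python
"""Effective throughput of the flannel-cni-plugin pull reported above, using the
size and duration fields copied from the log entry."""
size_bytes = 3_662_650       # size "3662650" in the Pulled image message
duration_s = 1.181_629_691   # "in 1.181629691s"

print(f"{size_bytes / duration_s / 2**20:.2f} MiB/s")  # ≈ 2.96 MiB/s
```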
Sep 8 23:54:41.321048 containerd[1447]: time="2025-09-08T23:54:41.320995105Z" level=info msg="StartContainer for \"aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232\" returns successfully" Sep 8 23:54:41.355970 containerd[1447]: time="2025-09-08T23:54:41.355909709Z" level=info msg="shim disconnected" id=aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232 namespace=k8s.io Sep 8 23:54:41.355970 containerd[1447]: time="2025-09-08T23:54:41.355965524Z" level=warning msg="cleaning up after shim disconnected" id=aa4a49e615a2b6f2c75aa474d9db4ebde05f9b37c5c8ac6756e42fa06ad14232 namespace=k8s.io Sep 8 23:54:41.355970 containerd[1447]: time="2025-09-08T23:54:41.355976887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:54:42.336912 containerd[1447]: time="2025-09-08T23:54:42.336679719Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Sep 8 23:54:43.461539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839014196.mount: Deactivated successfully. Sep 8 23:54:44.946194 containerd[1447]: time="2025-09-08T23:54:44.945637517Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:44.946591 containerd[1447]: time="2025-09-08T23:54:44.946266420Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Sep 8 23:54:44.955828 containerd[1447]: time="2025-09-08T23:54:44.955770179Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:44.958925 containerd[1447]: time="2025-09-08T23:54:44.958874765Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:54:44.960318 containerd[1447]: time="2025-09-08T23:54:44.960219190Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 2.623500982s" Sep 8 23:54:44.960318 containerd[1447]: time="2025-09-08T23:54:44.960250357Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Sep 8 23:54:44.963576 containerd[1447]: time="2025-09-08T23:54:44.963432400Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 8 23:54:44.976363 containerd[1447]: time="2025-09-08T23:54:44.976320329Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d\"" Sep 8 23:54:44.976912 containerd[1447]: time="2025-09-08T23:54:44.976872054Z" level=info msg="StartContainer for \"032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d\"" Sep 8 23:54:45.008341 systemd[1]: Started cri-containerd-032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d.scope - 
libcontainer container 032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d. Sep 8 23:54:45.033318 systemd[1]: cri-containerd-032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d.scope: Deactivated successfully. Sep 8 23:54:45.034326 containerd[1447]: time="2025-09-08T23:54:45.034279981Z" level=info msg="StartContainer for \"032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d\" returns successfully" Sep 8 23:54:45.054967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d-rootfs.mount: Deactivated successfully. Sep 8 23:54:45.065352 containerd[1447]: time="2025-09-08T23:54:45.065164469Z" level=info msg="shim disconnected" id=032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d namespace=k8s.io Sep 8 23:54:45.065352 containerd[1447]: time="2025-09-08T23:54:45.065223241Z" level=warning msg="cleaning up after shim disconnected" id=032a8eedad4c7b9103236436b18cbee500b1701d657120464adc671d7bbd448d namespace=k8s.io Sep 8 23:54:45.065352 containerd[1447]: time="2025-09-08T23:54:45.065232203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:54:45.123742 kubelet[2500]: I0908 23:54:45.123711 2500 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:54:45.170686 systemd[1]: Created slice kubepods-burstable-podb89fd75d_0481_41a5_8ab1_aed0fbbb1c27.slice - libcontainer container kubepods-burstable-podb89fd75d_0481_41a5_8ab1_aed0fbbb1c27.slice. Sep 8 23:54:45.176685 systemd[1]: Created slice kubepods-burstable-pod9a0b6136_96e0_4cb9_83e0_2d57b98f7f26.slice - libcontainer container kubepods-burstable-pod9a0b6136_96e0_4cb9_83e0_2d57b98f7f26.slice. Sep 8 23:54:45.251650 kubelet[2500]: I0908 23:54:45.251512 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a0b6136-96e0-4cb9-83e0-2d57b98f7f26-config-volume\") pod \"coredns-668d6bf9bc-psnr6\" (UID: \"9a0b6136-96e0-4cb9-83e0-2d57b98f7f26\") " pod="kube-system/coredns-668d6bf9bc-psnr6" Sep 8 23:54:45.251650 kubelet[2500]: I0908 23:54:45.251563 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b89fd75d-0481-41a5-8ab1-aed0fbbb1c27-config-volume\") pod \"coredns-668d6bf9bc-svj6p\" (UID: \"b89fd75d-0481-41a5-8ab1-aed0fbbb1c27\") " pod="kube-system/coredns-668d6bf9bc-svj6p" Sep 8 23:54:45.251650 kubelet[2500]: I0908 23:54:45.251632 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2hjp\" (UniqueName: \"kubernetes.io/projected/b89fd75d-0481-41a5-8ab1-aed0fbbb1c27-kube-api-access-l2hjp\") pod \"coredns-668d6bf9bc-svj6p\" (UID: \"b89fd75d-0481-41a5-8ab1-aed0fbbb1c27\") " pod="kube-system/coredns-668d6bf9bc-svj6p" Sep 8 23:54:45.251812 kubelet[2500]: I0908 23:54:45.251664 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbb6\" (UniqueName: \"kubernetes.io/projected/9a0b6136-96e0-4cb9-83e0-2d57b98f7f26-kube-api-access-njbb6\") pod \"coredns-668d6bf9bc-psnr6\" (UID: \"9a0b6136-96e0-4cb9-83e0-2d57b98f7f26\") " pod="kube-system/coredns-668d6bf9bc-psnr6" Sep 8 23:54:45.346460 containerd[1447]: time="2025-09-08T23:54:45.346401320Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for 
container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Sep 8 23:54:45.368288 containerd[1447]: time="2025-09-08T23:54:45.368154162Z" level=info msg="CreateContainer within sandbox \"6ab8973822ca7c5bb87967f76803a89efa86d4c3b52db8c307650cb6be8caed9\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6311cb622446c074d30e589ca1a9174782994ffcbdb66dee04c3722e3fc0f940\"" Sep 8 23:54:45.370146 containerd[1447]: time="2025-09-08T23:54:45.370108423Z" level=info msg="StartContainer for \"6311cb622446c074d30e589ca1a9174782994ffcbdb66dee04c3722e3fc0f940\"" Sep 8 23:54:45.400320 systemd[1]: Started cri-containerd-6311cb622446c074d30e589ca1a9174782994ffcbdb66dee04c3722e3fc0f940.scope - libcontainer container 6311cb622446c074d30e589ca1a9174782994ffcbdb66dee04c3722e3fc0f940. Sep 8 23:54:45.423659 containerd[1447]: time="2025-09-08T23:54:45.423615980Z" level=info msg="StartContainer for \"6311cb622446c074d30e589ca1a9174782994ffcbdb66dee04c3722e3fc0f940\" returns successfully" Sep 8 23:54:45.474615 containerd[1447]: time="2025-09-08T23:54:45.474571707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svj6p,Uid:b89fd75d-0481-41a5-8ab1-aed0fbbb1c27,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:45.481433 containerd[1447]: time="2025-09-08T23:54:45.481376652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-psnr6,Uid:9a0b6136-96e0-4cb9-83e0-2d57b98f7f26,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:45.518318 containerd[1447]: time="2025-09-08T23:54:45.518201258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svj6p,Uid:b89fd75d-0481-41a5-8ab1-aed0fbbb1c27,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96db243ee784c094503087ff1fdb32d91eb35530976f756469aa3d6523d5998a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:54:45.518618 kubelet[2500]: E0908 23:54:45.518451 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96db243ee784c094503087ff1fdb32d91eb35530976f756469aa3d6523d5998a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:54:45.518618 kubelet[2500]: E0908 23:54:45.518522 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96db243ee784c094503087ff1fdb32d91eb35530976f756469aa3d6523d5998a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-svj6p" Sep 8 23:54:45.518618 kubelet[2500]: E0908 23:54:45.518541 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96db243ee784c094503087ff1fdb32d91eb35530976f756469aa3d6523d5998a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-svj6p" Sep 8 23:54:45.518618 kubelet[2500]: E0908 23:54:45.518577 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-svj6p_kube-system(b89fd75d-0481-41a5-8ab1-aed0fbbb1c27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-svj6p_kube-system(b89fd75d-0481-41a5-8ab1-aed0fbbb1c27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96db243ee784c094503087ff1fdb32d91eb35530976f756469aa3d6523d5998a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-svj6p" podUID="b89fd75d-0481-41a5-8ab1-aed0fbbb1c27" Sep 8 23:54:45.524550 containerd[1447]: time="2025-09-08T23:54:45.524445682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-psnr6,Uid:9a0b6136-96e0-4cb9-83e0-2d57b98f7f26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1961253bb7aaf1c5bb05b4560b265c077a50f089783c82671fcd43053e391905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:54:45.524682 kubelet[2500]: E0908 23:54:45.524638 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1961253bb7aaf1c5bb05b4560b265c077a50f089783c82671fcd43053e391905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Sep 8 23:54:45.524716 kubelet[2500]: E0908 23:54:45.524691 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1961253bb7aaf1c5bb05b4560b265c077a50f089783c82671fcd43053e391905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-psnr6" Sep 8 23:54:45.524716 kubelet[2500]: E0908 23:54:45.524708 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1961253bb7aaf1c5bb05b4560b265c077a50f089783c82671fcd43053e391905\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-psnr6" Sep 8 23:54:45.524780 kubelet[2500]: E0908 23:54:45.524740 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-psnr6_kube-system(9a0b6136-96e0-4cb9-83e0-2d57b98f7f26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-psnr6_kube-system(9a0b6136-96e0-4cb9-83e0-2d57b98f7f26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1961253bb7aaf1c5bb05b4560b265c077a50f089783c82671fcd43053e391905\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-psnr6" podUID="9a0b6136-96e0-4cb9-83e0-2d57b98f7f26" Sep 8 23:54:45.975759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742674170.mount: Deactivated successfully. 
Sep 8 23:54:46.483372 systemd-networkd[1386]: flannel.1: Link UP Sep 8 23:54:46.483384 systemd-networkd[1386]: flannel.1: Gained carrier Sep 8 23:54:47.560309 systemd-networkd[1386]: flannel.1: Gained IPv6LL Sep 8 23:54:48.465847 kubelet[2500]: I0908 23:54:48.465774 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-lkmb5" podStartSLOduration=4.555103654 podStartE2EDuration="9.465756614s" podCreationTimestamp="2025-09-08 23:54:39 +0000 UTC" firstStartedPulling="2025-09-08 23:54:40.050527569 +0000 UTC m=+4.845274276" lastFinishedPulling="2025-09-08 23:54:44.961180569 +0000 UTC m=+9.755927236" observedRunningTime="2025-09-08 23:54:46.357840606 +0000 UTC m=+11.152587313" watchObservedRunningTime="2025-09-08 23:54:48.465756614 +0000 UTC m=+13.260503321" Sep 8 23:54:51.127263 update_engine[1438]: I20250908 23:54:51.127156 1438 update_attempter.cc:509] Updating boot flags... Sep 8 23:54:51.155169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3154) Sep 8 23:54:51.201266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3158) Sep 8 23:54:51.255275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3158) Sep 8 23:54:57.290904 containerd[1447]: time="2025-09-08T23:54:57.290853023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-psnr6,Uid:9a0b6136-96e0-4cb9-83e0-2d57b98f7f26,Namespace:kube-system,Attempt:0,}" Sep 8 23:54:57.319037 systemd-networkd[1386]: cni0: Link UP Sep 8 23:54:57.319046 systemd-networkd[1386]: cni0: Gained carrier Sep 8 23:54:57.321404 systemd-networkd[1386]: cni0: Lost carrier Sep 8 23:54:57.326636 systemd-networkd[1386]: veth973e725c: Link UP Sep 8 23:54:57.327278 kernel: cni0: port 1(veth973e725c) entered blocking state Sep 8 23:54:57.327334 kernel: cni0: port 1(veth973e725c) entered disabled state Sep 8 23:54:57.327351 kernel: veth973e725c: entered allmulticast mode Sep 8 23:54:57.328618 kernel: veth973e725c: entered promiscuous mode Sep 8 23:54:57.328701 kernel: cni0: port 1(veth973e725c) entered blocking state Sep 8 23:54:57.329228 kernel: cni0: port 1(veth973e725c) entered forwarding state Sep 8 23:54:57.331210 kernel: cni0: port 1(veth973e725c) entered disabled state Sep 8 23:54:57.338847 kernel: cni0: port 1(veth973e725c) entered blocking state Sep 8 23:54:57.338899 kernel: cni0: port 1(veth973e725c) entered forwarding state Sep 8 23:54:57.340349 systemd-networkd[1386]: veth973e725c: Gained carrier Sep 8 23:54:57.340641 systemd-networkd[1386]: cni0: Gained carrier Sep 8 23:54:57.344644 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Sep 8 23:54:57.344644 containerd[1447]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:54:57.364107 containerd[1447]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:54:57.364031598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:54:57.364107 containerd[1447]: time="2025-09-08T23:54:57.364086724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:54:57.364107 containerd[1447]: time="2025-09-08T23:54:57.364098766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:57.364281 containerd[1447]: time="2025-09-08T23:54:57.364197137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:54:57.390390 systemd[1]: Started cri-containerd-23553cc511735f9976a30ea65bed6e80add77594c539f6fa625f599d01b061b0.scope - libcontainer container 23553cc511735f9976a30ea65bed6e80add77594c539f6fa625f599d01b061b0. Sep 8 23:54:57.404732 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:54:57.427610 containerd[1447]: time="2025-09-08T23:54:57.427560111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-psnr6,Uid:9a0b6136-96e0-4cb9-83e0-2d57b98f7f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"23553cc511735f9976a30ea65bed6e80add77594c539f6fa625f599d01b061b0\"" Sep 8 23:54:57.445939 containerd[1447]: time="2025-09-08T23:54:57.444037940Z" level=info msg="CreateContainer within sandbox \"23553cc511735f9976a30ea65bed6e80add77594c539f6fa625f599d01b061b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:54:57.458757 containerd[1447]: time="2025-09-08T23:54:57.458619745Z" level=info msg="CreateContainer within sandbox \"23553cc511735f9976a30ea65bed6e80add77594c539f6fa625f599d01b061b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"523154480d870bab17c9cf22b8a898c4c4d7526e52f9b1c716faa2c54ed7e76a\"" Sep 8 23:54:57.461255 containerd[1447]: time="2025-09-08T23:54:57.460329227Z" level=info msg="StartContainer for \"523154480d870bab17c9cf22b8a898c4c4d7526e52f9b1c716faa2c54ed7e76a\"" Sep 8 23:54:57.486341 systemd[1]: Started cri-containerd-523154480d870bab17c9cf22b8a898c4c4d7526e52f9b1c716faa2c54ed7e76a.scope - libcontainer container 523154480d870bab17c9cf22b8a898c4c4d7526e52f9b1c716faa2c54ed7e76a. 
Sep 8 23:54:57.513344 containerd[1447]: time="2025-09-08T23:54:57.513302732Z" level=info msg="StartContainer for \"523154480d870bab17c9cf22b8a898c4c4d7526e52f9b1c716faa2c54ed7e76a\" returns successfully" Sep 8 23:54:58.395963 kubelet[2500]: I0908 23:54:58.395903 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-psnr6" podStartSLOduration=19.395876348 podStartE2EDuration="19.395876348s" podCreationTimestamp="2025-09-08 23:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:54:58.395737172 +0000 UTC m=+23.190483879" watchObservedRunningTime="2025-09-08 23:54:58.395876348 +0000 UTC m=+23.190623095" Sep 8 23:54:58.696291 systemd-networkd[1386]: cni0: Gained IPv6LL Sep 8 23:54:59.016343 systemd-networkd[1386]: veth973e725c: Gained IPv6LL Sep 8 23:54:59.797421 systemd[1]: Started sshd@5-10.0.0.99:22-10.0.0.1:46264.service - OpenSSH per-connection server daemon (10.0.0.1:46264). Sep 8 23:54:59.843661 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 46264 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:54:59.845374 sshd-session[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:54:59.850065 systemd-logind[1433]: New session 6 of user core. Sep 8 23:54:59.859379 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:54:59.988422 sshd[3335]: Connection closed by 10.0.0.1 port 46264 Sep 8 23:54:59.988796 sshd-session[3333]: pam_unix(sshd:session): session closed for user core Sep 8 23:54:59.993486 systemd[1]: sshd@5-10.0.0.99:22-10.0.0.1:46264.service: Deactivated successfully. Sep 8 23:54:59.995456 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:54:59.996260 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:54:59.997076 systemd-logind[1433]: Removed session 6. 
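[Editor's note] The pod_startup_latency_tracker entries above (this one and the earlier one for kube-flannel-ds-lkmb5) fit a simple relationship: podStartE2EDuration is the watch-observed running time minus the pod creation timestamp, and podStartSLOduration is that value minus the time spent pulling images. This reading is inferred from the logged numbers rather than quoted from kubelet source; the sketch below rechecks the arithmetic for the kube-flannel pod using the timestamps exactly as logged.

// latency.go: verifies the pod startup latency numbers reported above.
package main

import (
    "fmt"
    "time"
)

// mustParse parses timestamps in the form the kubelet logs them
// ("2025-09-08 23:54:39 +0000 UTC", with optional fractional seconds).
func mustParse(v string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", v)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2025-09-08 23:54:39 +0000 UTC")
    firstPull := mustParse("2025-09-08 23:54:40.050527569 +0000 UTC")
    lastPull := mustParse("2025-09-08 23:54:44.961180569 +0000 UTC")
    watched := mustParse("2025-09-08 23:54:48.465756614 +0000 UTC")

    e2e := watched.Sub(created)
    slo := e2e - lastPull.Sub(firstPull)
    fmt.Println("podStartE2EDuration:", e2e) // 9.465756614s, matching the log
    // Prints 4.555103614s; the log shows 4.555103654s. The ~40ns gap suggests the
    // kubelet subtracts the monotonic readings (the m=+... values) rather than the
    // wall-clock times, but that detail is an inference, not confirmed here.
    fmt.Println("podStartSLOduration:", slo)
}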
Sep 8 23:55:00.301010 containerd[1447]: time="2025-09-08T23:55:00.300593682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svj6p,Uid:b89fd75d-0481-41a5-8ab1-aed0fbbb1c27,Namespace:kube-system,Attempt:0,}" Sep 8 23:55:00.324216 kernel: cni0: port 2(veth0917be4e) entered blocking state Sep 8 23:55:00.324370 kernel: cni0: port 2(veth0917be4e) entered disabled state Sep 8 23:55:00.324388 kernel: veth0917be4e: entered allmulticast mode Sep 8 23:55:00.324401 kernel: veth0917be4e: entered promiscuous mode Sep 8 23:55:00.322039 systemd-networkd[1386]: veth0917be4e: Link UP Sep 8 23:55:00.335177 kernel: cni0: port 2(veth0917be4e) entered blocking state Sep 8 23:55:00.335310 kernel: cni0: port 2(veth0917be4e) entered forwarding state Sep 8 23:55:00.335822 systemd-networkd[1386]: veth0917be4e: Gained carrier Sep 8 23:55:00.337815 containerd[1447]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40001148e8), "name":"cbr0", "type":"bridge"} Sep 8 23:55:00.337815 containerd[1447]: delegateAdd: netconf sent to delegate plugin: Sep 8 23:55:00.364601 containerd[1447]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-09-08T23:55:00.364482066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:55:00.364601 containerd[1447]: time="2025-09-08T23:55:00.364545792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:55:00.364601 containerd[1447]: time="2025-09-08T23:55:00.364557553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:00.365072 containerd[1447]: time="2025-09-08T23:55:00.364644962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:55:00.389342 systemd[1]: Started cri-containerd-68c1bd1de2c0561513e9edfed7b87d1cef617e0035fd4b0a40f24a33afcd8ef0.scope - libcontainer container 68c1bd1de2c0561513e9edfed7b87d1cef617e0035fd4b0a40f24a33afcd8ef0. 
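[Editor's note] The kernel messages above show each CoreDNS pod's veth being attached as a port of the cni0 bridge (port 1 veth973e725c earlier, port 2 veth0917be4e here), passing briefly through the blocking and disabled states before forwarding. For confirming the same port membership from the node, the short Go sketch below lists every interface whose master is cni0 using github.com/vishvananda/netlink; it is a standalone illustration run on the node, not part of the flannel or containerd code paths.

// ports.go: lists the interfaces currently enslaved to the cni0 bridge,
// i.e. the "cni0: port N(vethXXXX)" relationships reported by the kernel.
// Linux only; requires the vishvananda/netlink module.
package main

import (
    "fmt"

    "github.com/vishvananda/netlink"
)

func main() {
    br, err := netlink.LinkByName("cni0")
    if err != nil {
        fmt.Println("cni0 not present (yet):", err)
        return
    }
    links, err := netlink.LinkList()
    if err != nil {
        panic(err)
    }
    for _, l := range links {
        // An interface is a bridge port when its master index points at cni0.
        if l.Attrs().MasterIndex == br.Attrs().Index {
            fmt.Printf("cni0 port: %s (mtu %d)\n", l.Attrs().Name, l.Attrs().MTU)
        }
    }
}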
Sep 8 23:55:00.399963 systemd-resolved[1319]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:55:00.417345 containerd[1447]: time="2025-09-08T23:55:00.417306142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-svj6p,Uid:b89fd75d-0481-41a5-8ab1-aed0fbbb1c27,Namespace:kube-system,Attempt:0,} returns sandbox id \"68c1bd1de2c0561513e9edfed7b87d1cef617e0035fd4b0a40f24a33afcd8ef0\"" Sep 8 23:55:00.420870 containerd[1447]: time="2025-09-08T23:55:00.420837108Z" level=info msg="CreateContainer within sandbox \"68c1bd1de2c0561513e9edfed7b87d1cef617e0035fd4b0a40f24a33afcd8ef0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 8 23:55:00.436791 containerd[1447]: time="2025-09-08T23:55:00.436733636Z" level=info msg="CreateContainer within sandbox \"68c1bd1de2c0561513e9edfed7b87d1cef617e0035fd4b0a40f24a33afcd8ef0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e8b83df0fd9825fda94747bacdf5045b7215ab89f5dd2a2eff4721d8a4a6635\"" Sep 8 23:55:00.437785 containerd[1447]: time="2025-09-08T23:55:00.437748901Z" level=info msg="StartContainer for \"5e8b83df0fd9825fda94747bacdf5045b7215ab89f5dd2a2eff4721d8a4a6635\"" Sep 8 23:55:00.469328 systemd[1]: Started cri-containerd-5e8b83df0fd9825fda94747bacdf5045b7215ab89f5dd2a2eff4721d8a4a6635.scope - libcontainer container 5e8b83df0fd9825fda94747bacdf5045b7215ab89f5dd2a2eff4721d8a4a6635. Sep 8 23:55:00.492598 containerd[1447]: time="2025-09-08T23:55:00.492557303Z" level=info msg="StartContainer for \"5e8b83df0fd9825fda94747bacdf5045b7215ab89f5dd2a2eff4721d8a4a6635\" returns successfully" Sep 8 23:55:01.416232 kubelet[2500]: I0908 23:55:01.415075 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-svj6p" podStartSLOduration=22.415047697 podStartE2EDuration="22.415047697s" podCreationTimestamp="2025-09-08 23:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:55:01.413975471 +0000 UTC m=+26.208722178" watchObservedRunningTime="2025-09-08 23:55:01.415047697 +0000 UTC m=+26.209794444" Sep 8 23:55:01.832299 systemd-networkd[1386]: veth0917be4e: Gained IPv6LL Sep 8 23:55:05.004125 systemd[1]: Started sshd@6-10.0.0.99:22-10.0.0.1:49526.service - OpenSSH per-connection server daemon (10.0.0.1:49526). Sep 8 23:55:05.056837 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 49526 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:05.058389 sshd-session[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:05.062285 systemd-logind[1433]: New session 7 of user core. Sep 8 23:55:05.072479 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:55:05.189749 sshd[3497]: Connection closed by 10.0.0.1 port 49526 Sep 8 23:55:05.190099 sshd-session[3495]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:05.193290 systemd[1]: sshd@6-10.0.0.99:22-10.0.0.1:49526.service: Deactivated successfully. Sep 8 23:55:05.194991 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:55:05.195723 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:55:05.196733 systemd-logind[1433]: Removed session 7. Sep 8 23:55:10.205128 systemd[1]: Started sshd@7-10.0.0.99:22-10.0.0.1:45386.service - OpenSSH per-connection server daemon (10.0.0.1:45386). 
Sep 8 23:55:10.273390 sshd[3534]: Accepted publickey for core from 10.0.0.1 port 45386 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:10.274721 sshd-session[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:10.284531 systemd-logind[1433]: New session 8 of user core. Sep 8 23:55:10.291338 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 8 23:55:10.425261 sshd[3538]: Connection closed by 10.0.0.1 port 45386 Sep 8 23:55:10.425698 sshd-session[3534]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:10.444581 systemd[1]: sshd@7-10.0.0.99:22-10.0.0.1:45386.service: Deactivated successfully. Sep 8 23:55:10.446263 systemd[1]: session-8.scope: Deactivated successfully. Sep 8 23:55:10.448106 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit. Sep 8 23:55:10.449343 systemd[1]: Started sshd@8-10.0.0.99:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). Sep 8 23:55:10.450265 systemd-logind[1433]: Removed session 8. Sep 8 23:55:10.491601 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:10.492807 sshd-session[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:10.497235 systemd-logind[1433]: New session 9 of user core. Sep 8 23:55:10.506300 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 8 23:55:10.652234 sshd[3554]: Connection closed by 10.0.0.1 port 45396 Sep 8 23:55:10.652629 sshd-session[3551]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:10.667706 systemd[1]: sshd@8-10.0.0.99:22-10.0.0.1:45396.service: Deactivated successfully. Sep 8 23:55:10.673657 systemd[1]: session-9.scope: Deactivated successfully. Sep 8 23:55:10.677353 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit. Sep 8 23:55:10.685518 systemd[1]: Started sshd@9-10.0.0.99:22-10.0.0.1:45400.service - OpenSSH per-connection server daemon (10.0.0.1:45400). Sep 8 23:55:10.688062 systemd-logind[1433]: Removed session 9. Sep 8 23:55:10.726151 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 45400 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:10.727444 sshd-session[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:10.733534 systemd-logind[1433]: New session 10 of user core. Sep 8 23:55:10.739313 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 8 23:55:10.857892 sshd[3568]: Connection closed by 10.0.0.1 port 45400 Sep 8 23:55:10.858285 sshd-session[3565]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:10.861117 systemd[1]: sshd@9-10.0.0.99:22-10.0.0.1:45400.service: Deactivated successfully. Sep 8 23:55:10.862867 systemd[1]: session-10.scope: Deactivated successfully. Sep 8 23:55:10.864197 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit. Sep 8 23:55:10.865050 systemd-logind[1433]: Removed session 10. Sep 8 23:55:15.887985 systemd[1]: Started sshd@10-10.0.0.99:22-10.0.0.1:45412.service - OpenSSH per-connection server daemon (10.0.0.1:45412). 
Sep 8 23:55:15.933752 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 45412 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:15.935490 sshd-session[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:15.940939 systemd-logind[1433]: New session 11 of user core. Sep 8 23:55:15.949396 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 8 23:55:16.074905 sshd[3604]: Connection closed by 10.0.0.1 port 45412 Sep 8 23:55:16.074814 sshd-session[3602]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:16.087075 systemd[1]: sshd@10-10.0.0.99:22-10.0.0.1:45412.service: Deactivated successfully. Sep 8 23:55:16.090838 systemd[1]: session-11.scope: Deactivated successfully. Sep 8 23:55:16.095106 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit. Sep 8 23:55:16.105904 systemd[1]: Started sshd@11-10.0.0.99:22-10.0.0.1:45426.service - OpenSSH per-connection server daemon (10.0.0.1:45426). Sep 8 23:55:16.107318 systemd-logind[1433]: Removed session 11. Sep 8 23:55:16.158745 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 45426 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:16.160531 sshd-session[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:16.166433 systemd-logind[1433]: New session 12 of user core. Sep 8 23:55:16.174302 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 8 23:55:16.361090 sshd[3620]: Connection closed by 10.0.0.1 port 45426 Sep 8 23:55:16.361338 sshd-session[3616]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:16.377627 systemd[1]: sshd@11-10.0.0.99:22-10.0.0.1:45426.service: Deactivated successfully. Sep 8 23:55:16.379533 systemd[1]: session-12.scope: Deactivated successfully. Sep 8 23:55:16.381184 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit. Sep 8 23:55:16.382666 systemd[1]: Started sshd@12-10.0.0.99:22-10.0.0.1:45430.service - OpenSSH per-connection server daemon (10.0.0.1:45430). Sep 8 23:55:16.383419 systemd-logind[1433]: Removed session 12. Sep 8 23:55:16.429092 sshd[3630]: Accepted publickey for core from 10.0.0.1 port 45430 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:16.430327 sshd-session[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:16.434653 systemd-logind[1433]: New session 13 of user core. Sep 8 23:55:16.452351 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 8 23:55:17.072642 sshd[3633]: Connection closed by 10.0.0.1 port 45430 Sep 8 23:55:17.074687 sshd-session[3630]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:17.088553 systemd[1]: sshd@12-10.0.0.99:22-10.0.0.1:45430.service: Deactivated successfully. Sep 8 23:55:17.092552 systemd[1]: session-13.scope: Deactivated successfully. Sep 8 23:55:17.097374 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit. Sep 8 23:55:17.104798 systemd[1]: Started sshd@13-10.0.0.99:22-10.0.0.1:45432.service - OpenSSH per-connection server daemon (10.0.0.1:45432). Sep 8 23:55:17.106913 systemd-logind[1433]: Removed session 13. 
Sep 8 23:55:17.155525 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 45432 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:17.156836 sshd-session[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:17.161339 systemd-logind[1433]: New session 14 of user core. Sep 8 23:55:17.168334 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 8 23:55:17.401927 sshd[3676]: Connection closed by 10.0.0.1 port 45432 Sep 8 23:55:17.404361 sshd-session[3673]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:17.415916 systemd[1]: sshd@13-10.0.0.99:22-10.0.0.1:45432.service: Deactivated successfully. Sep 8 23:55:17.417790 systemd[1]: session-14.scope: Deactivated successfully. Sep 8 23:55:17.418565 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit. Sep 8 23:55:17.430536 systemd[1]: Started sshd@14-10.0.0.99:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Sep 8 23:55:17.431440 systemd-logind[1433]: Removed session 14. Sep 8 23:55:17.470617 sshd[3687]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:17.472318 sshd-session[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:17.476081 systemd-logind[1433]: New session 15 of user core. Sep 8 23:55:17.483275 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 8 23:55:17.594813 sshd[3690]: Connection closed by 10.0.0.1 port 45434 Sep 8 23:55:17.595157 sshd-session[3687]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:17.598372 systemd[1]: sshd@14-10.0.0.99:22-10.0.0.1:45434.service: Deactivated successfully. Sep 8 23:55:17.600114 systemd[1]: session-15.scope: Deactivated successfully. Sep 8 23:55:17.603277 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit. Sep 8 23:55:17.604136 systemd-logind[1433]: Removed session 15. Sep 8 23:55:22.623966 systemd[1]: Started sshd@15-10.0.0.99:22-10.0.0.1:51926.service - OpenSSH per-connection server daemon (10.0.0.1:51926). Sep 8 23:55:22.667786 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 51926 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:22.669606 sshd-session[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:22.674087 systemd-logind[1433]: New session 16 of user core. Sep 8 23:55:22.692345 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 8 23:55:22.819653 sshd[3728]: Connection closed by 10.0.0.1 port 51926 Sep 8 23:55:22.820051 sshd-session[3726]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:22.826566 systemd[1]: sshd@15-10.0.0.99:22-10.0.0.1:51926.service: Deactivated successfully. Sep 8 23:55:22.829792 systemd[1]: session-16.scope: Deactivated successfully. Sep 8 23:55:22.832343 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit. Sep 8 23:55:22.833798 systemd-logind[1433]: Removed session 16. Sep 8 23:55:27.832683 systemd[1]: Started sshd@16-10.0.0.99:22-10.0.0.1:51942.service - OpenSSH per-connection server daemon (10.0.0.1:51942). 
Sep 8 23:55:27.881162 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 51942 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:27.881631 sshd-session[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:27.885363 systemd-logind[1433]: New session 17 of user core. Sep 8 23:55:27.897304 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 8 23:55:28.013227 sshd[3765]: Connection closed by 10.0.0.1 port 51942 Sep 8 23:55:28.013838 sshd-session[3763]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:28.017556 systemd[1]: sshd@16-10.0.0.99:22-10.0.0.1:51942.service: Deactivated successfully. Sep 8 23:55:28.019225 systemd[1]: session-17.scope: Deactivated successfully. Sep 8 23:55:28.020006 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit. Sep 8 23:55:28.021650 systemd-logind[1433]: Removed session 17. Sep 8 23:55:33.026197 systemd[1]: Started sshd@17-10.0.0.99:22-10.0.0.1:42482.service - OpenSSH per-connection server daemon (10.0.0.1:42482). Sep 8 23:55:33.070231 sshd[3800]: Accepted publickey for core from 10.0.0.1 port 42482 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:55:33.071690 sshd-session[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:55:33.076218 systemd-logind[1433]: New session 18 of user core. Sep 8 23:55:33.090416 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 8 23:55:33.198357 sshd[3802]: Connection closed by 10.0.0.1 port 42482 Sep 8 23:55:33.199064 sshd-session[3800]: pam_unix(sshd:session): session closed for user core Sep 8 23:55:33.202283 systemd[1]: sshd@17-10.0.0.99:22-10.0.0.1:42482.service: Deactivated successfully. Sep 8 23:55:33.203977 systemd[1]: session-18.scope: Deactivated successfully. Sep 8 23:55:33.204636 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit. Sep 8 23:55:33.205383 systemd-logind[1433]: Removed session 18.